00:00:00.000 Started by upstream project "autotest-per-patch" build number 127181 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.099 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.100 The recommended git tool is: git 00:00:00.100 using credential 00000000-0000-0000-0000-000000000002 00:00:00.104 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.142 Fetching changes from the remote Git repository 00:00:00.143 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.190 Using shallow fetch with depth 1 00:00:00.190 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.190 > git --version # timeout=10 00:00:00.219 > git --version # 'git version 2.39.2' 00:00:00.219 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.233 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.233 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.480 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.490 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.501 Checking out Revision 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b (FETCH_HEAD) 00:00:08.501 > git config core.sparsecheckout # timeout=10 00:00:08.510 > git read-tree -mu HEAD # timeout=10 00:00:08.524 > git checkout -f 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b # timeout=5 00:00:08.540 Commit message: "jjb/jobs: add SPDK_TEST_SETUP flag into configuration" 00:00:08.540 > git rev-list --no-walk 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b # timeout=10 00:00:08.629 [Pipeline] Start of Pipeline 00:00:08.645 [Pipeline] library 00:00:08.647 Loading library shm_lib@master 00:00:08.647 Library shm_lib@master is cached. Copying from home. 00:00:08.662 [Pipeline] node 00:00:08.670 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:08.672 [Pipeline] { 00:00:08.685 [Pipeline] catchError 00:00:08.686 [Pipeline] { 00:00:08.699 [Pipeline] wrap 00:00:08.706 [Pipeline] { 00:00:08.712 [Pipeline] stage 00:00:08.713 [Pipeline] { (Prologue) 00:00:08.730 [Pipeline] echo 00:00:08.732 Node: VM-host-SM17 00:00:08.738 [Pipeline] cleanWs 00:00:08.746 [WS-CLEANUP] Deleting project workspace... 00:00:08.746 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.752 [WS-CLEANUP] done 00:00:08.908 [Pipeline] setCustomBuildProperty 00:00:08.969 [Pipeline] httpRequest 00:00:08.999 [Pipeline] echo 00:00:09.001 Sorcerer 10.211.164.101 is alive 00:00:09.010 [Pipeline] httpRequest 00:00:09.014 HttpMethod: GET 00:00:09.014 URL: http://10.211.164.101/packages/jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz 00:00:09.015 Sending request to url: http://10.211.164.101/packages/jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz 00:00:09.016 Response Code: HTTP/1.1 200 OK 00:00:09.017 Success: Status code 200 is in the accepted range: 200,404 00:00:09.017 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz 00:00:10.133 [Pipeline] sh 00:00:10.413 + tar --no-same-owner -xf jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz 00:00:10.429 [Pipeline] httpRequest 00:00:10.447 [Pipeline] echo 00:00:10.449 Sorcerer 10.211.164.101 is alive 00:00:10.458 [Pipeline] httpRequest 00:00:10.462 HttpMethod: GET 00:00:10.463 URL: http://10.211.164.101/packages/spdk_50fa6ca312c882d0fe919228ad8d3bfd61579d43.tar.gz 00:00:10.463 Sending request to url: http://10.211.164.101/packages/spdk_50fa6ca312c882d0fe919228ad8d3bfd61579d43.tar.gz 00:00:10.468 Response Code: HTTP/1.1 200 OK 00:00:10.468 Success: Status code 200 is in the accepted range: 200,404 00:00:10.469 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk_50fa6ca312c882d0fe919228ad8d3bfd61579d43.tar.gz 00:01:25.778 [Pipeline] sh 00:01:26.056 + tar --no-same-owner -xf spdk_50fa6ca312c882d0fe919228ad8d3bfd61579d43.tar.gz 00:01:28.599 [Pipeline] sh 00:01:28.878 + git -C spdk log --oneline -n5 00:01:28.879 50fa6ca31 raid: allow to skip rebuild when adding a base bdev 00:01:28.879 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:01:28.879 fc2398dfa raid: clear base bdev configure_cb after executing 00:01:28.879 5558f3f50 raid: complete bdev_raid_create after sb is written 00:01:28.879 d005e023b raid: fix empty slot not updated in sb after resize 00:01:28.898 [Pipeline] writeFile 00:01:28.915 [Pipeline] sh 00:01:29.198 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:29.210 [Pipeline] sh 00:01:29.525 + cat autorun-spdk.conf 00:01:29.525 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.525 SPDK_TEST_NVMF=1 00:01:29.525 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:29.525 SPDK_TEST_URING=1 00:01:29.525 SPDK_TEST_USDT=1 00:01:29.525 SPDK_RUN_UBSAN=1 00:01:29.525 NET_TYPE=virt 00:01:29.525 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:29.532 RUN_NIGHTLY=0 00:01:29.534 [Pipeline] } 00:01:29.550 [Pipeline] // stage 00:01:29.564 [Pipeline] stage 00:01:29.566 [Pipeline] { (Run VM) 00:01:29.580 [Pipeline] sh 00:01:29.861 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:29.861 + echo 'Start stage prepare_nvme.sh' 00:01:29.861 Start stage prepare_nvme.sh 00:01:29.861 + [[ -n 6 ]] 00:01:29.861 + disk_prefix=ex6 00:01:29.861 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 ]] 00:01:29.861 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf ]] 00:01:29.861 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf 00:01:29.861 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.861 ++ SPDK_TEST_NVMF=1 00:01:29.861 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:29.861 ++ SPDK_TEST_URING=1 00:01:29.861 ++ SPDK_TEST_USDT=1 00:01:29.861 ++ SPDK_RUN_UBSAN=1 00:01:29.861 ++ NET_TYPE=virt 00:01:29.861 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:29.861 ++ RUN_NIGHTLY=0 00:01:29.861 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:01:29.861 + nvme_files=() 00:01:29.861 + declare -A nvme_files 00:01:29.861 + backend_dir=/var/lib/libvirt/images/backends 00:01:29.861 + nvme_files['nvme.img']=5G 00:01:29.861 + nvme_files['nvme-cmb.img']=5G 00:01:29.861 + nvme_files['nvme-multi0.img']=4G 00:01:29.861 + nvme_files['nvme-multi1.img']=4G 00:01:29.861 + nvme_files['nvme-multi2.img']=4G 00:01:29.861 + nvme_files['nvme-openstack.img']=8G 00:01:29.861 + nvme_files['nvme-zns.img']=5G 00:01:29.861 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:29.861 + (( SPDK_TEST_FTL == 1 )) 00:01:29.861 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:29.861 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:29.861 + for nvme in "${!nvme_files[@]}" 00:01:29.861 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:29.861 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:29.861 + for nvme in "${!nvme_files[@]}" 00:01:29.861 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:29.861 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:29.861 + for nvme in "${!nvme_files[@]}" 00:01:29.861 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:29.861 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:29.861 + for nvme in "${!nvme_files[@]}" 00:01:29.861 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:29.861 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:29.861 + for nvme in "${!nvme_files[@]}" 00:01:29.861 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:29.861 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:29.861 + for nvme in "${!nvme_files[@]}" 00:01:29.861 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:29.861 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:29.861 + for nvme in "${!nvme_files[@]}" 00:01:29.861 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:29.861 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:29.861 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:29.861 + echo 'End stage prepare_nvme.sh' 00:01:29.861 End stage prepare_nvme.sh 00:01:29.872 [Pipeline] sh 00:01:30.152 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:30.152 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora38 00:01:30.152 00:01:30.152 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant 00:01:30.152 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk 00:01:30.152 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:01:30.152 HELP=0 00:01:30.152 DRY_RUN=0 00:01:30.152 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:01:30.152 NVME_DISKS_TYPE=nvme,nvme, 00:01:30.152 NVME_AUTO_CREATE=0 00:01:30.152 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:01:30.152 NVME_CMB=,, 00:01:30.152 NVME_PMR=,, 00:01:30.152 NVME_ZNS=,, 00:01:30.152 NVME_MS=,, 00:01:30.152 NVME_FDP=,, 
00:01:30.152 SPDK_VAGRANT_DISTRO=fedora38 00:01:30.152 SPDK_VAGRANT_VMCPU=10 00:01:30.152 SPDK_VAGRANT_VMRAM=12288 00:01:30.152 SPDK_VAGRANT_PROVIDER=libvirt 00:01:30.152 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:30.152 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:30.152 SPDK_OPENSTACK_NETWORK=0 00:01:30.152 VAGRANT_PACKAGE_BOX=0 00:01:30.152 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:30.152 FORCE_DISTRO=true 00:01:30.152 VAGRANT_BOX_VERSION= 00:01:30.152 EXTRA_VAGRANTFILES= 00:01:30.152 NIC_MODEL=e1000 00:01:30.152 00:01:30.152 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt' 00:01:30.152 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:01:33.442 Bringing machine 'default' up with 'libvirt' provider... 00:01:34.023 ==> default: Creating image (snapshot of base box volume). 00:01:34.023 ==> default: Creating domain with the following settings... 00:01:34.023 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721915062_91009c6bede83cbca37a 00:01:34.023 ==> default: -- Domain type: kvm 00:01:34.023 ==> default: -- Cpus: 10 00:01:34.023 ==> default: -- Feature: acpi 00:01:34.023 ==> default: -- Feature: apic 00:01:34.023 ==> default: -- Feature: pae 00:01:34.023 ==> default: -- Memory: 12288M 00:01:34.023 ==> default: -- Memory Backing: hugepages: 00:01:34.023 ==> default: -- Management MAC: 00:01:34.023 ==> default: -- Loader: 00:01:34.023 ==> default: -- Nvram: 00:01:34.023 ==> default: -- Base box: spdk/fedora38 00:01:34.023 ==> default: -- Storage pool: default 00:01:34.023 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721915062_91009c6bede83cbca37a.img (20G) 00:01:34.023 ==> default: -- Volume Cache: default 00:01:34.023 ==> default: -- Kernel: 00:01:34.023 ==> default: -- Initrd: 00:01:34.023 ==> default: -- Graphics Type: vnc 00:01:34.023 ==> default: -- Graphics Port: -1 00:01:34.023 ==> default: -- Graphics IP: 127.0.0.1 00:01:34.024 ==> default: -- Graphics Password: Not defined 00:01:34.024 ==> default: -- Video Type: cirrus 00:01:34.024 ==> default: -- Video VRAM: 9216 00:01:34.024 ==> default: -- Sound Type: 00:01:34.024 ==> default: -- Keymap: en-us 00:01:34.024 ==> default: -- TPM Path: 00:01:34.024 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:34.024 ==> default: -- Command line args: 00:01:34.024 ==> default: -> value=-device, 00:01:34.024 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:34.024 ==> default: -> value=-drive, 00:01:34.024 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:01:34.024 ==> default: -> value=-device, 00:01:34.024 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.024 ==> default: -> value=-device, 00:01:34.024 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:34.024 ==> default: -> value=-drive, 00:01:34.024 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:34.024 ==> default: -> value=-device, 00:01:34.024 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.024 ==> default: -> 
value=-drive, 00:01:34.024 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:34.024 ==> default: -> value=-device, 00:01:34.024 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.024 ==> default: -> value=-drive, 00:01:34.024 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:34.024 ==> default: -> value=-device, 00:01:34.024 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.024 ==> default: Creating shared folders metadata... 00:01:34.024 ==> default: Starting domain. 00:01:35.937 ==> default: Waiting for domain to get an IP address... 00:01:54.033 ==> default: Waiting for SSH to become available... 00:01:54.033 ==> default: Configuring and enabling network interfaces... 00:01:56.577 default: SSH address: 192.168.121.103:22 00:01:56.577 default: SSH username: vagrant 00:01:56.577 default: SSH auth method: private key 00:01:58.555 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:06.676 ==> default: Mounting SSHFS shared folder... 00:02:08.054 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:08.054 ==> default: Checking Mount.. 00:02:08.990 ==> default: Folder Successfully Mounted! 00:02:08.990 ==> default: Running provisioner: file... 00:02:09.926 default: ~/.gitconfig => .gitconfig 00:02:10.494 00:02:10.494 SUCCESS! 00:02:10.494 00:02:10.494 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:02:10.494 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:10.494 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 00:02:10.494 00:02:10.503 [Pipeline] } 00:02:10.521 [Pipeline] // stage 00:02:10.529 [Pipeline] dir 00:02:10.529 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt 00:02:10.530 [Pipeline] { 00:02:10.542 [Pipeline] catchError 00:02:10.544 [Pipeline] { 00:02:10.559 [Pipeline] sh 00:02:10.837 + vagrant ssh-config --host vagrant 00:02:10.837 + sed -ne /^Host/,$p 00:02:10.837 + tee ssh_conf 00:02:15.023 Host vagrant 00:02:15.023 HostName 192.168.121.103 00:02:15.023 User vagrant 00:02:15.023 Port 22 00:02:15.023 UserKnownHostsFile /dev/null 00:02:15.023 StrictHostKeyChecking no 00:02:15.023 PasswordAuthentication no 00:02:15.023 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:15.023 IdentitiesOnly yes 00:02:15.023 LogLevel FATAL 00:02:15.023 ForwardAgent yes 00:02:15.023 ForwardX11 yes 00:02:15.023 00:02:15.036 [Pipeline] withEnv 00:02:15.038 [Pipeline] { 00:02:15.053 [Pipeline] sh 00:02:15.328 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:15.328 source /etc/os-release 00:02:15.328 [[ -e /image.version ]] && img=$(< /image.version) 00:02:15.328 # Minimal, systemd-like check. 
00:02:15.328 if [[ -e /.dockerenv ]]; then 00:02:15.328 # Clear garbage from the node's name: 00:02:15.328 # agt-er_autotest_547-896 -> autotest_547-896 00:02:15.328 # $HOSTNAME is the actual container id 00:02:15.328 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:15.328 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:15.328 # We can assume this is a mount from a host where container is running, 00:02:15.328 # so fetch its hostname to easily identify the target swarm worker. 00:02:15.328 container="$(< /etc/hostname) ($agent)" 00:02:15.328 else 00:02:15.328 # Fallback 00:02:15.328 container=$agent 00:02:15.328 fi 00:02:15.328 fi 00:02:15.328 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:15.328 00:02:15.338 [Pipeline] } 00:02:15.355 [Pipeline] // withEnv 00:02:15.363 [Pipeline] setCustomBuildProperty 00:02:15.374 [Pipeline] stage 00:02:15.375 [Pipeline] { (Tests) 00:02:15.385 [Pipeline] sh 00:02:15.660 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:15.932 [Pipeline] sh 00:02:16.212 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:16.485 [Pipeline] timeout 00:02:16.486 Timeout set to expire in 30 min 00:02:16.488 [Pipeline] { 00:02:16.503 [Pipeline] sh 00:02:16.781 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:17.349 HEAD is now at 50fa6ca31 raid: allow to skip rebuild when adding a base bdev 00:02:17.362 [Pipeline] sh 00:02:17.643 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:17.915 [Pipeline] sh 00:02:18.200 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:18.484 [Pipeline] sh 00:02:18.764 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:19.022 ++ readlink -f spdk_repo 00:02:19.022 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:19.022 + [[ -n /home/vagrant/spdk_repo ]] 00:02:19.022 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:19.022 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:19.022 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:19.022 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:19.022 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:19.022 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:19.022 + cd /home/vagrant/spdk_repo 00:02:19.022 + source /etc/os-release 00:02:19.022 ++ NAME='Fedora Linux' 00:02:19.022 ++ VERSION='38 (Cloud Edition)' 00:02:19.022 ++ ID=fedora 00:02:19.022 ++ VERSION_ID=38 00:02:19.022 ++ VERSION_CODENAME= 00:02:19.022 ++ PLATFORM_ID=platform:f38 00:02:19.022 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:19.022 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:19.022 ++ LOGO=fedora-logo-icon 00:02:19.022 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:19.022 ++ HOME_URL=https://fedoraproject.org/ 00:02:19.022 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:19.022 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:19.022 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:19.022 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:19.022 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:19.022 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:19.022 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:19.022 ++ SUPPORT_END=2024-05-14 00:02:19.022 ++ VARIANT='Cloud Edition' 00:02:19.022 ++ VARIANT_ID=cloud 00:02:19.022 + uname -a 00:02:19.022 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:19.022 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:19.281 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:19.540 Hugepages 00:02:19.540 node hugesize free / total 00:02:19.540 node0 1048576kB 0 / 0 00:02:19.540 node0 2048kB 0 / 0 00:02:19.540 00:02:19.540 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:19.540 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:19.540 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:19.540 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:19.540 + rm -f /tmp/spdk-ld-path 00:02:19.540 + source autorun-spdk.conf 00:02:19.540 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:19.540 ++ SPDK_TEST_NVMF=1 00:02:19.540 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:19.540 ++ SPDK_TEST_URING=1 00:02:19.540 ++ SPDK_TEST_USDT=1 00:02:19.540 ++ SPDK_RUN_UBSAN=1 00:02:19.540 ++ NET_TYPE=virt 00:02:19.540 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:19.540 ++ RUN_NIGHTLY=0 00:02:19.540 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:19.540 + [[ -n '' ]] 00:02:19.540 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:19.540 + for M in /var/spdk/build-*-manifest.txt 00:02:19.540 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:19.540 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:19.540 + for M in /var/spdk/build-*-manifest.txt 00:02:19.540 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:19.540 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:19.540 ++ uname 00:02:19.540 + [[ Linux == \L\i\n\u\x ]] 00:02:19.540 + sudo dmesg -T 00:02:19.540 + sudo dmesg --clear 00:02:19.540 + dmesg_pid=5101 00:02:19.540 + sudo dmesg -Tw 00:02:19.540 + [[ Fedora Linux == FreeBSD ]] 00:02:19.540 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:19.540 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:19.540 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:19.540 + [[ -x /usr/src/fio-static/fio ]] 00:02:19.540 + export FIO_BIN=/usr/src/fio-static/fio 
00:02:19.540 + FIO_BIN=/usr/src/fio-static/fio 00:02:19.540 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:19.540 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:19.540 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:19.540 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:19.540 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:19.540 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:19.540 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:19.540 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:19.540 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:19.540 Test configuration: 00:02:19.540 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:19.540 SPDK_TEST_NVMF=1 00:02:19.540 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:19.540 SPDK_TEST_URING=1 00:02:19.540 SPDK_TEST_USDT=1 00:02:19.540 SPDK_RUN_UBSAN=1 00:02:19.540 NET_TYPE=virt 00:02:19.540 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:19.799 RUN_NIGHTLY=0 13:45:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:19.799 13:45:08 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:19.799 13:45:08 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:19.799 13:45:08 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:19.799 13:45:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.799 13:45:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.799 13:45:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.799 13:45:08 -- paths/export.sh@5 -- $ export PATH 00:02:19.799 13:45:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.799 13:45:08 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:19.799 13:45:08 -- common/autobuild_common.sh@447 -- $ date +%s 00:02:19.799 13:45:08 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721915108.XXXXXX 00:02:19.799 13:45:08 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721915108.Nf9dOd 00:02:19.799 13:45:08 -- 
common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:19.799 13:45:08 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:02:19.799 13:45:08 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:19.799 13:45:08 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:19.799 13:45:08 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:19.799 13:45:08 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:19.799 13:45:08 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:02:19.799 13:45:08 -- common/autotest_common.sh@10 -- $ set +x 00:02:19.799 13:45:08 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:02:19.799 13:45:08 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:19.799 13:45:08 -- pm/common@17 -- $ local monitor 00:02:19.799 13:45:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.799 13:45:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.799 13:45:08 -- pm/common@25 -- $ sleep 1 00:02:19.799 13:45:08 -- pm/common@21 -- $ date +%s 00:02:19.799 13:45:08 -- pm/common@21 -- $ date +%s 00:02:19.799 13:45:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721915108 00:02:19.799 13:45:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721915108 00:02:19.799 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721915108_collect-vmstat.pm.log 00:02:19.799 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721915108_collect-cpu-load.pm.log 00:02:20.735 13:45:09 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:20.735 13:45:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:20.735 13:45:09 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:20.735 13:45:09 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:20.735 13:45:09 -- spdk/autobuild.sh@16 -- $ date -u 00:02:20.735 Thu Jul 25 01:45:09 PM UTC 2024 00:02:20.735 13:45:09 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:20.735 v24.09-pre-322-g50fa6ca31 00:02:20.735 13:45:09 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:20.735 13:45:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:20.735 13:45:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:20.735 13:45:09 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:20.735 13:45:09 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:20.735 13:45:09 -- common/autotest_common.sh@10 -- $ set +x 00:02:20.735 ************************************ 00:02:20.735 START TEST ubsan 00:02:20.735 ************************************ 00:02:20.735 using ubsan 00:02:20.735 13:45:09 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:20.735 00:02:20.735 real 0m0.000s 00:02:20.735 user 0m0.000s 00:02:20.735 sys 0m0.000s 00:02:20.735 
13:45:09 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:20.735 ************************************ 00:02:20.735 END TEST ubsan 00:02:20.735 13:45:09 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:20.735 ************************************ 00:02:20.735 13:45:09 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:20.735 13:45:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:20.735 13:45:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:20.735 13:45:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:20.735 13:45:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:20.735 13:45:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:20.735 13:45:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:20.735 13:45:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:20.735 13:45:09 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:02:20.993 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:20.993 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:21.560 Using 'verbs' RDMA provider 00:02:37.392 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:49.597 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:49.597 Creating mk/config.mk...done. 00:02:49.597 Creating mk/cc.flags.mk...done. 00:02:49.597 Type 'make' to build. 00:02:49.597 13:45:37 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:49.597 13:45:37 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:49.597 13:45:37 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:49.597 13:45:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:49.597 ************************************ 00:02:49.597 START TEST make 00:02:49.597 ************************************ 00:02:49.597 13:45:37 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:49.597 make[1]: Nothing to be done for 'all'. 
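Note: the DPDK sub-configure whose Meson output follows is driven automatically by SPDK's own make; as a rough, hand-run approximation (not the exact wrapper invocation), using the source/build paths and the "User defined options" reported in the summary further below, it corresponds to something like the commands here. The wrapper also passes the long disable_apps/disable_libs lists shown in that summary, which are omitted from this sketch for brevity:

  cd /home/vagrant/spdk_repo/spdk/dpdk
  # Configure the DPDK build tree (debug, shared libs) into build-tmp/
  meson setup build-tmp \
      --buildtype=debug --default-library=shared --libdir=lib \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Denable_docs=false -Denable_kmods=false \
      -Dmax_lcores=128 -Dtests=false
  # Compile the configured targets (the [N/268] lines below come from this step)
  ninja -C build-tmp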
00:02:59.586 The Meson build system 00:02:59.586 Version: 1.3.1 00:02:59.586 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:59.586 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:59.586 Build type: native build 00:02:59.586 Program cat found: YES (/usr/bin/cat) 00:02:59.586 Project name: DPDK 00:02:59.586 Project version: 24.03.0 00:02:59.586 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:59.586 C linker for the host machine: cc ld.bfd 2.39-16 00:02:59.586 Host machine cpu family: x86_64 00:02:59.586 Host machine cpu: x86_64 00:02:59.586 Message: ## Building in Developer Mode ## 00:02:59.586 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:59.586 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:59.586 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:59.586 Program python3 found: YES (/usr/bin/python3) 00:02:59.586 Program cat found: YES (/usr/bin/cat) 00:02:59.586 Compiler for C supports arguments -march=native: YES 00:02:59.586 Checking for size of "void *" : 8 00:02:59.586 Checking for size of "void *" : 8 (cached) 00:02:59.586 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:59.586 Library m found: YES 00:02:59.586 Library numa found: YES 00:02:59.586 Has header "numaif.h" : YES 00:02:59.586 Library fdt found: NO 00:02:59.586 Library execinfo found: NO 00:02:59.586 Has header "execinfo.h" : YES 00:02:59.586 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:59.586 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:59.586 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:59.586 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:59.586 Run-time dependency openssl found: YES 3.0.9 00:02:59.586 Run-time dependency libpcap found: YES 1.10.4 00:02:59.586 Has header "pcap.h" with dependency libpcap: YES 00:02:59.586 Compiler for C supports arguments -Wcast-qual: YES 00:02:59.586 Compiler for C supports arguments -Wdeprecated: YES 00:02:59.586 Compiler for C supports arguments -Wformat: YES 00:02:59.586 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:59.586 Compiler for C supports arguments -Wformat-security: NO 00:02:59.586 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:59.586 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:59.586 Compiler for C supports arguments -Wnested-externs: YES 00:02:59.586 Compiler for C supports arguments -Wold-style-definition: YES 00:02:59.586 Compiler for C supports arguments -Wpointer-arith: YES 00:02:59.586 Compiler for C supports arguments -Wsign-compare: YES 00:02:59.586 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:59.586 Compiler for C supports arguments -Wundef: YES 00:02:59.586 Compiler for C supports arguments -Wwrite-strings: YES 00:02:59.586 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:59.586 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:59.586 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:59.586 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:59.586 Program objdump found: YES (/usr/bin/objdump) 00:02:59.586 Compiler for C supports arguments -mavx512f: YES 00:02:59.586 Checking if "AVX512 checking" compiles: YES 00:02:59.586 Fetching value of define "__SSE4_2__" : 1 00:02:59.586 Fetching value of define 
"__AES__" : 1 00:02:59.586 Fetching value of define "__AVX__" : 1 00:02:59.586 Fetching value of define "__AVX2__" : 1 00:02:59.586 Fetching value of define "__AVX512BW__" : (undefined) 00:02:59.586 Fetching value of define "__AVX512CD__" : (undefined) 00:02:59.586 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:59.586 Fetching value of define "__AVX512F__" : (undefined) 00:02:59.586 Fetching value of define "__AVX512VL__" : (undefined) 00:02:59.586 Fetching value of define "__PCLMUL__" : 1 00:02:59.586 Fetching value of define "__RDRND__" : 1 00:02:59.586 Fetching value of define "__RDSEED__" : 1 00:02:59.586 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:59.586 Fetching value of define "__znver1__" : (undefined) 00:02:59.586 Fetching value of define "__znver2__" : (undefined) 00:02:59.586 Fetching value of define "__znver3__" : (undefined) 00:02:59.586 Fetching value of define "__znver4__" : (undefined) 00:02:59.586 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:59.586 Message: lib/log: Defining dependency "log" 00:02:59.586 Message: lib/kvargs: Defining dependency "kvargs" 00:02:59.586 Message: lib/telemetry: Defining dependency "telemetry" 00:02:59.586 Checking for function "getentropy" : NO 00:02:59.586 Message: lib/eal: Defining dependency "eal" 00:02:59.586 Message: lib/ring: Defining dependency "ring" 00:02:59.586 Message: lib/rcu: Defining dependency "rcu" 00:02:59.586 Message: lib/mempool: Defining dependency "mempool" 00:02:59.586 Message: lib/mbuf: Defining dependency "mbuf" 00:02:59.586 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:59.586 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:59.586 Compiler for C supports arguments -mpclmul: YES 00:02:59.586 Compiler for C supports arguments -maes: YES 00:02:59.586 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:59.586 Compiler for C supports arguments -mavx512bw: YES 00:02:59.586 Compiler for C supports arguments -mavx512dq: YES 00:02:59.586 Compiler for C supports arguments -mavx512vl: YES 00:02:59.586 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:59.586 Compiler for C supports arguments -mavx2: YES 00:02:59.586 Compiler for C supports arguments -mavx: YES 00:02:59.586 Message: lib/net: Defining dependency "net" 00:02:59.586 Message: lib/meter: Defining dependency "meter" 00:02:59.586 Message: lib/ethdev: Defining dependency "ethdev" 00:02:59.586 Message: lib/pci: Defining dependency "pci" 00:02:59.586 Message: lib/cmdline: Defining dependency "cmdline" 00:02:59.586 Message: lib/hash: Defining dependency "hash" 00:02:59.586 Message: lib/timer: Defining dependency "timer" 00:02:59.586 Message: lib/compressdev: Defining dependency "compressdev" 00:02:59.586 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:59.586 Message: lib/dmadev: Defining dependency "dmadev" 00:02:59.586 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:59.586 Message: lib/power: Defining dependency "power" 00:02:59.586 Message: lib/reorder: Defining dependency "reorder" 00:02:59.586 Message: lib/security: Defining dependency "security" 00:02:59.586 Has header "linux/userfaultfd.h" : YES 00:02:59.586 Has header "linux/vduse.h" : YES 00:02:59.586 Message: lib/vhost: Defining dependency "vhost" 00:02:59.586 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:59.586 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:59.586 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:59.586 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:59.586 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:59.586 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:59.586 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:59.586 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:59.587 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:59.587 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:59.587 Program doxygen found: YES (/usr/bin/doxygen) 00:02:59.587 Configuring doxy-api-html.conf using configuration 00:02:59.587 Configuring doxy-api-man.conf using configuration 00:02:59.587 Program mandb found: YES (/usr/bin/mandb) 00:02:59.587 Program sphinx-build found: NO 00:02:59.587 Configuring rte_build_config.h using configuration 00:02:59.587 Message: 00:02:59.587 ================= 00:02:59.587 Applications Enabled 00:02:59.587 ================= 00:02:59.587 00:02:59.587 apps: 00:02:59.587 00:02:59.587 00:02:59.587 Message: 00:02:59.587 ================= 00:02:59.587 Libraries Enabled 00:02:59.587 ================= 00:02:59.587 00:02:59.587 libs: 00:02:59.587 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:59.587 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:59.587 cryptodev, dmadev, power, reorder, security, vhost, 00:02:59.587 00:02:59.587 Message: 00:02:59.587 =============== 00:02:59.587 Drivers Enabled 00:02:59.587 =============== 00:02:59.587 00:02:59.587 common: 00:02:59.587 00:02:59.587 bus: 00:02:59.587 pci, vdev, 00:02:59.587 mempool: 00:02:59.587 ring, 00:02:59.587 dma: 00:02:59.587 00:02:59.587 net: 00:02:59.587 00:02:59.587 crypto: 00:02:59.587 00:02:59.587 compress: 00:02:59.587 00:02:59.587 vdpa: 00:02:59.587 00:02:59.587 00:02:59.587 Message: 00:02:59.587 ================= 00:02:59.587 Content Skipped 00:02:59.587 ================= 00:02:59.587 00:02:59.587 apps: 00:02:59.587 dumpcap: explicitly disabled via build config 00:02:59.587 graph: explicitly disabled via build config 00:02:59.587 pdump: explicitly disabled via build config 00:02:59.587 proc-info: explicitly disabled via build config 00:02:59.587 test-acl: explicitly disabled via build config 00:02:59.587 test-bbdev: explicitly disabled via build config 00:02:59.587 test-cmdline: explicitly disabled via build config 00:02:59.587 test-compress-perf: explicitly disabled via build config 00:02:59.587 test-crypto-perf: explicitly disabled via build config 00:02:59.587 test-dma-perf: explicitly disabled via build config 00:02:59.587 test-eventdev: explicitly disabled via build config 00:02:59.587 test-fib: explicitly disabled via build config 00:02:59.587 test-flow-perf: explicitly disabled via build config 00:02:59.587 test-gpudev: explicitly disabled via build config 00:02:59.587 test-mldev: explicitly disabled via build config 00:02:59.587 test-pipeline: explicitly disabled via build config 00:02:59.587 test-pmd: explicitly disabled via build config 00:02:59.587 test-regex: explicitly disabled via build config 00:02:59.587 test-sad: explicitly disabled via build config 00:02:59.587 test-security-perf: explicitly disabled via build config 00:02:59.587 00:02:59.587 libs: 00:02:59.587 argparse: explicitly disabled via build config 00:02:59.587 metrics: explicitly disabled via build config 00:02:59.587 acl: explicitly disabled via build config 00:02:59.587 bbdev: explicitly disabled via build config 00:02:59.587 
bitratestats: explicitly disabled via build config 00:02:59.587 bpf: explicitly disabled via build config 00:02:59.587 cfgfile: explicitly disabled via build config 00:02:59.587 distributor: explicitly disabled via build config 00:02:59.587 efd: explicitly disabled via build config 00:02:59.587 eventdev: explicitly disabled via build config 00:02:59.587 dispatcher: explicitly disabled via build config 00:02:59.587 gpudev: explicitly disabled via build config 00:02:59.587 gro: explicitly disabled via build config 00:02:59.587 gso: explicitly disabled via build config 00:02:59.587 ip_frag: explicitly disabled via build config 00:02:59.587 jobstats: explicitly disabled via build config 00:02:59.587 latencystats: explicitly disabled via build config 00:02:59.587 lpm: explicitly disabled via build config 00:02:59.587 member: explicitly disabled via build config 00:02:59.587 pcapng: explicitly disabled via build config 00:02:59.587 rawdev: explicitly disabled via build config 00:02:59.587 regexdev: explicitly disabled via build config 00:02:59.587 mldev: explicitly disabled via build config 00:02:59.587 rib: explicitly disabled via build config 00:02:59.587 sched: explicitly disabled via build config 00:02:59.587 stack: explicitly disabled via build config 00:02:59.587 ipsec: explicitly disabled via build config 00:02:59.587 pdcp: explicitly disabled via build config 00:02:59.587 fib: explicitly disabled via build config 00:02:59.587 port: explicitly disabled via build config 00:02:59.587 pdump: explicitly disabled via build config 00:02:59.587 table: explicitly disabled via build config 00:02:59.587 pipeline: explicitly disabled via build config 00:02:59.587 graph: explicitly disabled via build config 00:02:59.587 node: explicitly disabled via build config 00:02:59.587 00:02:59.587 drivers: 00:02:59.587 common/cpt: not in enabled drivers build config 00:02:59.587 common/dpaax: not in enabled drivers build config 00:02:59.587 common/iavf: not in enabled drivers build config 00:02:59.587 common/idpf: not in enabled drivers build config 00:02:59.587 common/ionic: not in enabled drivers build config 00:02:59.587 common/mvep: not in enabled drivers build config 00:02:59.587 common/octeontx: not in enabled drivers build config 00:02:59.587 bus/auxiliary: not in enabled drivers build config 00:02:59.587 bus/cdx: not in enabled drivers build config 00:02:59.587 bus/dpaa: not in enabled drivers build config 00:02:59.587 bus/fslmc: not in enabled drivers build config 00:02:59.587 bus/ifpga: not in enabled drivers build config 00:02:59.587 bus/platform: not in enabled drivers build config 00:02:59.587 bus/uacce: not in enabled drivers build config 00:02:59.587 bus/vmbus: not in enabled drivers build config 00:02:59.587 common/cnxk: not in enabled drivers build config 00:02:59.587 common/mlx5: not in enabled drivers build config 00:02:59.587 common/nfp: not in enabled drivers build config 00:02:59.587 common/nitrox: not in enabled drivers build config 00:02:59.587 common/qat: not in enabled drivers build config 00:02:59.587 common/sfc_efx: not in enabled drivers build config 00:02:59.587 mempool/bucket: not in enabled drivers build config 00:02:59.587 mempool/cnxk: not in enabled drivers build config 00:02:59.587 mempool/dpaa: not in enabled drivers build config 00:02:59.587 mempool/dpaa2: not in enabled drivers build config 00:02:59.587 mempool/octeontx: not in enabled drivers build config 00:02:59.587 mempool/stack: not in enabled drivers build config 00:02:59.587 dma/cnxk: not in enabled drivers build 
config 00:02:59.587 dma/dpaa: not in enabled drivers build config 00:02:59.587 dma/dpaa2: not in enabled drivers build config 00:02:59.587 dma/hisilicon: not in enabled drivers build config 00:02:59.587 dma/idxd: not in enabled drivers build config 00:02:59.587 dma/ioat: not in enabled drivers build config 00:02:59.587 dma/skeleton: not in enabled drivers build config 00:02:59.587 net/af_packet: not in enabled drivers build config 00:02:59.587 net/af_xdp: not in enabled drivers build config 00:02:59.587 net/ark: not in enabled drivers build config 00:02:59.587 net/atlantic: not in enabled drivers build config 00:02:59.587 net/avp: not in enabled drivers build config 00:02:59.587 net/axgbe: not in enabled drivers build config 00:02:59.587 net/bnx2x: not in enabled drivers build config 00:02:59.587 net/bnxt: not in enabled drivers build config 00:02:59.587 net/bonding: not in enabled drivers build config 00:02:59.587 net/cnxk: not in enabled drivers build config 00:02:59.587 net/cpfl: not in enabled drivers build config 00:02:59.587 net/cxgbe: not in enabled drivers build config 00:02:59.587 net/dpaa: not in enabled drivers build config 00:02:59.587 net/dpaa2: not in enabled drivers build config 00:02:59.587 net/e1000: not in enabled drivers build config 00:02:59.587 net/ena: not in enabled drivers build config 00:02:59.588 net/enetc: not in enabled drivers build config 00:02:59.588 net/enetfec: not in enabled drivers build config 00:02:59.588 net/enic: not in enabled drivers build config 00:02:59.588 net/failsafe: not in enabled drivers build config 00:02:59.588 net/fm10k: not in enabled drivers build config 00:02:59.588 net/gve: not in enabled drivers build config 00:02:59.588 net/hinic: not in enabled drivers build config 00:02:59.588 net/hns3: not in enabled drivers build config 00:02:59.588 net/i40e: not in enabled drivers build config 00:02:59.588 net/iavf: not in enabled drivers build config 00:02:59.588 net/ice: not in enabled drivers build config 00:02:59.588 net/idpf: not in enabled drivers build config 00:02:59.588 net/igc: not in enabled drivers build config 00:02:59.588 net/ionic: not in enabled drivers build config 00:02:59.588 net/ipn3ke: not in enabled drivers build config 00:02:59.588 net/ixgbe: not in enabled drivers build config 00:02:59.588 net/mana: not in enabled drivers build config 00:02:59.588 net/memif: not in enabled drivers build config 00:02:59.588 net/mlx4: not in enabled drivers build config 00:02:59.588 net/mlx5: not in enabled drivers build config 00:02:59.588 net/mvneta: not in enabled drivers build config 00:02:59.588 net/mvpp2: not in enabled drivers build config 00:02:59.588 net/netvsc: not in enabled drivers build config 00:02:59.588 net/nfb: not in enabled drivers build config 00:02:59.588 net/nfp: not in enabled drivers build config 00:02:59.588 net/ngbe: not in enabled drivers build config 00:02:59.588 net/null: not in enabled drivers build config 00:02:59.588 net/octeontx: not in enabled drivers build config 00:02:59.588 net/octeon_ep: not in enabled drivers build config 00:02:59.588 net/pcap: not in enabled drivers build config 00:02:59.588 net/pfe: not in enabled drivers build config 00:02:59.588 net/qede: not in enabled drivers build config 00:02:59.588 net/ring: not in enabled drivers build config 00:02:59.588 net/sfc: not in enabled drivers build config 00:02:59.588 net/softnic: not in enabled drivers build config 00:02:59.588 net/tap: not in enabled drivers build config 00:02:59.588 net/thunderx: not in enabled drivers build config 00:02:59.588 
net/txgbe: not in enabled drivers build config 00:02:59.588 net/vdev_netvsc: not in enabled drivers build config 00:02:59.588 net/vhost: not in enabled drivers build config 00:02:59.588 net/virtio: not in enabled drivers build config 00:02:59.588 net/vmxnet3: not in enabled drivers build config 00:02:59.588 raw/*: missing internal dependency, "rawdev" 00:02:59.588 crypto/armv8: not in enabled drivers build config 00:02:59.588 crypto/bcmfs: not in enabled drivers build config 00:02:59.588 crypto/caam_jr: not in enabled drivers build config 00:02:59.588 crypto/ccp: not in enabled drivers build config 00:02:59.588 crypto/cnxk: not in enabled drivers build config 00:02:59.588 crypto/dpaa_sec: not in enabled drivers build config 00:02:59.588 crypto/dpaa2_sec: not in enabled drivers build config 00:02:59.588 crypto/ipsec_mb: not in enabled drivers build config 00:02:59.588 crypto/mlx5: not in enabled drivers build config 00:02:59.588 crypto/mvsam: not in enabled drivers build config 00:02:59.588 crypto/nitrox: not in enabled drivers build config 00:02:59.588 crypto/null: not in enabled drivers build config 00:02:59.588 crypto/octeontx: not in enabled drivers build config 00:02:59.588 crypto/openssl: not in enabled drivers build config 00:02:59.588 crypto/scheduler: not in enabled drivers build config 00:02:59.588 crypto/uadk: not in enabled drivers build config 00:02:59.588 crypto/virtio: not in enabled drivers build config 00:02:59.588 compress/isal: not in enabled drivers build config 00:02:59.588 compress/mlx5: not in enabled drivers build config 00:02:59.588 compress/nitrox: not in enabled drivers build config 00:02:59.588 compress/octeontx: not in enabled drivers build config 00:02:59.588 compress/zlib: not in enabled drivers build config 00:02:59.588 regex/*: missing internal dependency, "regexdev" 00:02:59.588 ml/*: missing internal dependency, "mldev" 00:02:59.588 vdpa/ifc: not in enabled drivers build config 00:02:59.588 vdpa/mlx5: not in enabled drivers build config 00:02:59.588 vdpa/nfp: not in enabled drivers build config 00:02:59.588 vdpa/sfc: not in enabled drivers build config 00:02:59.588 event/*: missing internal dependency, "eventdev" 00:02:59.588 baseband/*: missing internal dependency, "bbdev" 00:02:59.588 gpu/*: missing internal dependency, "gpudev" 00:02:59.588 00:02:59.588 00:02:59.588 Build targets in project: 85 00:02:59.588 00:02:59.588 DPDK 24.03.0 00:02:59.588 00:02:59.588 User defined options 00:02:59.588 buildtype : debug 00:02:59.588 default_library : shared 00:02:59.588 libdir : lib 00:02:59.588 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:59.588 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:59.588 c_link_args : 00:02:59.588 cpu_instruction_set: native 00:02:59.588 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:59.588 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:59.588 enable_docs : false 00:02:59.588 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:59.588 enable_kmods : false 00:02:59.588 max_lcores : 128 00:02:59.588 tests : false 00:02:59.588 00:02:59.588 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:59.588 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:59.588 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:59.588 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:59.588 [3/268] Linking static target lib/librte_kvargs.a 00:02:59.588 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:59.588 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:59.588 [6/268] Linking static target lib/librte_log.a 00:02:59.847 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.847 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:59.847 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:00.105 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:00.105 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:00.105 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:00.105 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:00.105 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:00.365 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.365 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:00.365 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:00.365 [18/268] Linking static target lib/librte_telemetry.a 00:03:00.365 [19/268] Linking target lib/librte_log.so.24.1 00:03:00.365 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:00.624 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:00.624 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:00.884 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:00.884 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:01.143 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:01.143 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:01.143 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:01.143 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:01.143 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:01.143 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:01.402 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:01.402 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.402 [33/268] Linking target lib/librte_telemetry.so.24.1 00:03:01.402 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:01.661 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:01.661 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:01.661 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:01.921 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:01.921 [39/268] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:01.921 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:02.178 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:02.178 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:02.178 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:02.178 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:02.178 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:02.437 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:02.437 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:02.437 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:02.695 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:02.695 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:02.695 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:02.954 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:02.954 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:03.213 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:03.213 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:03.213 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:03.213 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:03.213 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:03.213 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:03.493 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:03.493 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:03.493 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:03.798 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:03.798 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:04.058 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:04.058 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:04.058 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:04.317 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:04.317 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:04.317 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:04.317 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:04.576 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:04.576 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:04.576 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:04.576 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:04.834 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:05.093 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:05.093 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:05.093 [79/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:05.352 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:05.352 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:05.352 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:05.352 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:05.612 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:05.612 [85/268] Linking static target lib/librte_eal.a 00:03:05.612 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:05.612 [87/268] Linking static target lib/librte_ring.a 00:03:05.870 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:06.130 [89/268] Linking static target lib/librte_rcu.a 00:03:06.130 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:06.130 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:06.130 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:06.130 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:06.130 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.390 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:06.390 [96/268] Linking static target lib/librte_mempool.a 00:03:06.390 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:06.650 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:06.650 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.909 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:06.909 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:06.909 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:06.909 [103/268] Linking static target lib/librte_mbuf.a 00:03:06.909 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:07.168 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:07.168 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:07.168 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:07.737 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:07.737 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:07.737 [110/268] Linking static target lib/librte_meter.a 00:03:07.737 [111/268] Linking static target lib/librte_net.a 00:03:07.737 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.737 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:07.996 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:07.996 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.996 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.255 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.255 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:08.513 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:08.772 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 
00:03:09.031 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:09.031 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:09.031 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:09.290 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:09.290 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:09.290 [126/268] Linking static target lib/librte_pci.a 00:03:09.548 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:09.548 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:09.548 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:09.548 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:09.548 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:09.548 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:09.807 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:09.807 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:09.807 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:09.807 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:09.807 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.807 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:09.807 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:09.807 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:09.807 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:09.807 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:09.807 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:09.807 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:09.807 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:10.067 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:10.067 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:10.067 [148/268] Linking static target lib/librte_ethdev.a 00:03:10.326 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:10.326 [150/268] Linking static target lib/librte_cmdline.a 00:03:10.326 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:10.585 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:10.585 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:10.585 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:10.585 [155/268] Linking static target lib/librte_timer.a 00:03:10.845 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:10.845 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:10.845 [158/268] Linking static target lib/librte_hash.a 00:03:10.845 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:11.104 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:11.104 [161/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:11.104 [162/268] Linking static target lib/librte_compressdev.a 00:03:11.104 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:11.363 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.363 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:11.621 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:11.621 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:11.879 [168/268] Linking static target lib/librte_dmadev.a 00:03:11.879 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:11.879 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:11.879 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:11.879 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:12.136 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.136 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:12.136 [175/268] Linking static target lib/librte_cryptodev.a 00:03:12.136 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.136 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.394 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:12.652 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:12.652 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:12.652 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:12.652 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.910 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:12.910 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:12.910 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:12.910 [186/268] Linking static target lib/librte_power.a 00:03:13.168 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:13.168 [188/268] Linking static target lib/librte_reorder.a 00:03:13.428 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:13.428 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:13.687 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:13.687 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:13.687 [193/268] Linking static target lib/librte_security.a 00:03:13.946 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:13.946 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.205 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.464 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.464 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:14.464 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:14.464 [200/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:14.723 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:14.723 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.981 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:14.981 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:15.240 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:15.240 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:15.240 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:15.240 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:15.499 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:15.499 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:15.499 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:15.499 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:15.757 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:15.757 [214/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:15.757 [215/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:15.757 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:15.757 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:15.757 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:15.757 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:15.757 [220/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:15.757 [221/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:15.757 [222/268] Linking static target drivers/librte_bus_vdev.a 00:03:16.015 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:16.015 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:16.015 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:16.015 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:16.015 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.273 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.838 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:17.097 [230/268] Linking static target lib/librte_vhost.a 00:03:17.663 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.663 [232/268] Linking target lib/librte_eal.so.24.1 00:03:17.663 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:17.921 [234/268] Linking target lib/librte_dmadev.so.24.1 00:03:17.921 [235/268] Linking target lib/librte_timer.so.24.1 00:03:17.921 [236/268] Linking target lib/librte_ring.so.24.1 00:03:17.921 [237/268] Linking target lib/librte_meter.so.24.1 00:03:17.921 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:17.921 [239/268] Linking target lib/librte_pci.so.24.1 
00:03:17.921 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:17.921 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:17.921 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:17.921 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:17.921 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:17.921 [245/268] Linking target lib/librte_mempool.so.24.1 00:03:17.921 [246/268] Linking target lib/librte_rcu.so.24.1 00:03:17.921 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:18.180 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:18.180 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:18.180 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:18.180 [251/268] Linking target lib/librte_mbuf.so.24.1 00:03:18.439 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:18.439 [253/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.439 [254/268] Linking target lib/librte_net.so.24.1 00:03:18.439 [255/268] Linking target lib/librte_reorder.so.24.1 00:03:18.439 [256/268] Linking target lib/librte_compressdev.so.24.1 00:03:18.439 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:03:18.439 [258/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.698 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:18.698 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:18.698 [261/268] Linking target lib/librte_hash.so.24.1 00:03:18.698 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:18.698 [263/268] Linking target lib/librte_security.so.24.1 00:03:18.698 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:18.698 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:18.956 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:18.957 [267/268] Linking target lib/librte_power.so.24.1 00:03:18.957 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:18.957 INFO: autodetecting backend as ninja 00:03:18.957 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:20.333 CC lib/log/log.o 00:03:20.333 CC lib/log/log_deprecated.o 00:03:20.333 CC lib/log/log_flags.o 00:03:20.333 CC lib/ut_mock/mock.o 00:03:20.333 CC lib/ut/ut.o 00:03:20.333 LIB libspdk_ut_mock.a 00:03:20.333 LIB libspdk_log.a 00:03:20.333 LIB libspdk_ut.a 00:03:20.333 SO libspdk_ut_mock.so.6.0 00:03:20.333 SO libspdk_log.so.7.0 00:03:20.333 SO libspdk_ut.so.2.0 00:03:20.333 SYMLINK libspdk_ut_mock.so 00:03:20.333 SYMLINK libspdk_ut.so 00:03:20.333 SYMLINK libspdk_log.so 00:03:20.592 CXX lib/trace_parser/trace.o 00:03:20.592 CC lib/ioat/ioat.o 00:03:20.592 CC lib/util/base64.o 00:03:20.592 CC lib/util/bit_array.o 00:03:20.592 CC lib/util/cpuset.o 00:03:20.592 CC lib/dma/dma.o 00:03:20.592 CC lib/util/crc16.o 00:03:20.592 CC lib/util/crc32c.o 00:03:20.592 CC lib/util/crc32.o 00:03:20.592 CC lib/vfio_user/host/vfio_user_pci.o 00:03:20.851 CC lib/util/crc32_ieee.o 00:03:20.851 CC lib/util/crc64.o 00:03:20.851 CC lib/util/dif.o 
00:03:20.851 CC lib/vfio_user/host/vfio_user.o 00:03:20.851 CC lib/util/fd.o 00:03:20.851 LIB libspdk_dma.a 00:03:20.851 CC lib/util/fd_group.o 00:03:20.851 SO libspdk_dma.so.4.0 00:03:20.851 LIB libspdk_ioat.a 00:03:20.851 CC lib/util/file.o 00:03:21.109 SO libspdk_ioat.so.7.0 00:03:21.109 CC lib/util/hexlify.o 00:03:21.109 CC lib/util/iov.o 00:03:21.109 SYMLINK libspdk_ioat.so 00:03:21.109 SYMLINK libspdk_dma.so 00:03:21.109 CC lib/util/net.o 00:03:21.109 CC lib/util/math.o 00:03:21.109 CC lib/util/pipe.o 00:03:21.109 LIB libspdk_vfio_user.a 00:03:21.109 SO libspdk_vfio_user.so.5.0 00:03:21.109 CC lib/util/strerror_tls.o 00:03:21.109 CC lib/util/string.o 00:03:21.109 SYMLINK libspdk_vfio_user.so 00:03:21.109 CC lib/util/uuid.o 00:03:21.109 CC lib/util/xor.o 00:03:21.109 CC lib/util/zipf.o 00:03:21.368 LIB libspdk_util.a 00:03:21.368 SO libspdk_util.so.10.0 00:03:21.626 LIB libspdk_trace_parser.a 00:03:21.627 SYMLINK libspdk_util.so 00:03:21.627 SO libspdk_trace_parser.so.5.0 00:03:21.886 SYMLINK libspdk_trace_parser.so 00:03:21.886 CC lib/json/json_parse.o 00:03:21.886 CC lib/json/json_util.o 00:03:21.886 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:21.886 CC lib/json/json_write.o 00:03:21.886 CC lib/rdma_provider/common.o 00:03:21.886 CC lib/idxd/idxd.o 00:03:21.886 CC lib/rdma_utils/rdma_utils.o 00:03:21.886 CC lib/vmd/vmd.o 00:03:21.886 CC lib/env_dpdk/env.o 00:03:21.886 CC lib/conf/conf.o 00:03:22.145 CC lib/vmd/led.o 00:03:22.145 LIB libspdk_rdma_provider.a 00:03:22.145 LIB libspdk_conf.a 00:03:22.145 SO libspdk_rdma_provider.so.6.0 00:03:22.145 CC lib/idxd/idxd_user.o 00:03:22.145 SO libspdk_conf.so.6.0 00:03:22.145 CC lib/env_dpdk/memory.o 00:03:22.145 LIB libspdk_json.a 00:03:22.145 LIB libspdk_rdma_utils.a 00:03:22.145 SYMLINK libspdk_rdma_provider.so 00:03:22.145 CC lib/idxd/idxd_kernel.o 00:03:22.145 SYMLINK libspdk_conf.so 00:03:22.145 SO libspdk_json.so.6.0 00:03:22.145 SO libspdk_rdma_utils.so.1.0 00:03:22.145 CC lib/env_dpdk/pci.o 00:03:22.145 CC lib/env_dpdk/init.o 00:03:22.145 SYMLINK libspdk_rdma_utils.so 00:03:22.145 CC lib/env_dpdk/threads.o 00:03:22.145 SYMLINK libspdk_json.so 00:03:22.145 CC lib/env_dpdk/pci_ioat.o 00:03:22.405 CC lib/env_dpdk/pci_virtio.o 00:03:22.405 CC lib/env_dpdk/pci_vmd.o 00:03:22.405 CC lib/env_dpdk/pci_idxd.o 00:03:22.405 LIB libspdk_idxd.a 00:03:22.405 CC lib/env_dpdk/pci_event.o 00:03:22.405 SO libspdk_idxd.so.12.0 00:03:22.405 LIB libspdk_vmd.a 00:03:22.405 CC lib/env_dpdk/sigbus_handler.o 00:03:22.405 SYMLINK libspdk_idxd.so 00:03:22.405 CC lib/env_dpdk/pci_dpdk.o 00:03:22.405 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:22.405 SO libspdk_vmd.so.6.0 00:03:22.664 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:22.664 SYMLINK libspdk_vmd.so 00:03:22.664 CC lib/jsonrpc/jsonrpc_server.o 00:03:22.664 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:22.664 CC lib/jsonrpc/jsonrpc_client.o 00:03:22.664 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:22.923 LIB libspdk_jsonrpc.a 00:03:22.923 SO libspdk_jsonrpc.so.6.0 00:03:22.923 SYMLINK libspdk_jsonrpc.so 00:03:23.182 LIB libspdk_env_dpdk.a 00:03:23.182 CC lib/rpc/rpc.o 00:03:23.441 SO libspdk_env_dpdk.so.15.0 00:03:23.441 LIB libspdk_rpc.a 00:03:23.441 SYMLINK libspdk_env_dpdk.so 00:03:23.441 SO libspdk_rpc.so.6.0 00:03:23.700 SYMLINK libspdk_rpc.so 00:03:23.700 CC lib/keyring/keyring_rpc.o 00:03:23.700 CC lib/keyring/keyring.o 00:03:23.958 CC lib/notify/notify.o 00:03:23.958 CC lib/trace/trace.o 00:03:23.958 CC lib/trace/trace_flags.o 00:03:23.958 CC lib/notify/notify_rpc.o 00:03:23.958 CC lib/trace/trace_rpc.o 
00:03:23.958 LIB libspdk_notify.a 00:03:23.959 SO libspdk_notify.so.6.0 00:03:23.959 LIB libspdk_trace.a 00:03:24.217 LIB libspdk_keyring.a 00:03:24.217 SYMLINK libspdk_notify.so 00:03:24.217 SO libspdk_trace.so.10.0 00:03:24.217 SO libspdk_keyring.so.1.0 00:03:24.217 SYMLINK libspdk_trace.so 00:03:24.217 SYMLINK libspdk_keyring.so 00:03:24.487 CC lib/sock/sock.o 00:03:24.487 CC lib/sock/sock_rpc.o 00:03:24.487 CC lib/thread/iobuf.o 00:03:24.487 CC lib/thread/thread.o 00:03:24.761 LIB libspdk_sock.a 00:03:25.018 SO libspdk_sock.so.10.0 00:03:25.018 SYMLINK libspdk_sock.so 00:03:25.277 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:25.277 CC lib/nvme/nvme_ctrlr.o 00:03:25.277 CC lib/nvme/nvme_fabric.o 00:03:25.277 CC lib/nvme/nvme_ns_cmd.o 00:03:25.277 CC lib/nvme/nvme_ns.o 00:03:25.277 CC lib/nvme/nvme_pcie_common.o 00:03:25.277 CC lib/nvme/nvme_pcie.o 00:03:25.277 CC lib/nvme/nvme_qpair.o 00:03:25.277 CC lib/nvme/nvme.o 00:03:25.848 CC lib/nvme/nvme_quirks.o 00:03:26.107 CC lib/nvme/nvme_transport.o 00:03:26.107 CC lib/nvme/nvme_discovery.o 00:03:26.107 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:26.107 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:26.107 CC lib/nvme/nvme_tcp.o 00:03:26.107 LIB libspdk_thread.a 00:03:26.366 CC lib/nvme/nvme_opal.o 00:03:26.366 SO libspdk_thread.so.10.1 00:03:26.366 SYMLINK libspdk_thread.so 00:03:26.366 CC lib/nvme/nvme_io_msg.o 00:03:26.366 CC lib/nvme/nvme_poll_group.o 00:03:26.625 CC lib/nvme/nvme_zns.o 00:03:26.625 CC lib/nvme/nvme_stubs.o 00:03:26.625 CC lib/nvme/nvme_auth.o 00:03:26.884 CC lib/nvme/nvme_cuse.o 00:03:26.884 CC lib/accel/accel.o 00:03:26.884 CC lib/nvme/nvme_rdma.o 00:03:27.143 CC lib/blob/blobstore.o 00:03:27.143 CC lib/init/json_config.o 00:03:27.143 CC lib/init/subsystem.o 00:03:27.402 CC lib/init/subsystem_rpc.o 00:03:27.402 CC lib/init/rpc.o 00:03:27.402 CC lib/accel/accel_rpc.o 00:03:27.402 CC lib/accel/accel_sw.o 00:03:27.402 LIB libspdk_init.a 00:03:27.660 SO libspdk_init.so.5.0 00:03:27.660 CC lib/blob/request.o 00:03:27.660 SYMLINK libspdk_init.so 00:03:27.660 CC lib/blob/zeroes.o 00:03:27.660 CC lib/blob/blob_bs_dev.o 00:03:27.660 CC lib/virtio/virtio.o 00:03:27.660 CC lib/virtio/virtio_vhost_user.o 00:03:27.660 CC lib/virtio/virtio_vfio_user.o 00:03:27.918 CC lib/virtio/virtio_pci.o 00:03:27.918 LIB libspdk_accel.a 00:03:27.918 SO libspdk_accel.so.16.0 00:03:27.918 CC lib/event/app.o 00:03:27.918 CC lib/event/reactor.o 00:03:27.918 CC lib/event/log_rpc.o 00:03:27.918 SYMLINK libspdk_accel.so 00:03:27.918 CC lib/event/app_rpc.o 00:03:27.918 CC lib/event/scheduler_static.o 00:03:28.177 LIB libspdk_virtio.a 00:03:28.177 CC lib/bdev/bdev_rpc.o 00:03:28.177 CC lib/bdev/bdev.o 00:03:28.177 CC lib/bdev/bdev_zone.o 00:03:28.177 CC lib/bdev/part.o 00:03:28.177 SO libspdk_virtio.so.7.0 00:03:28.177 CC lib/bdev/scsi_nvme.o 00:03:28.177 SYMLINK libspdk_virtio.so 00:03:28.436 LIB libspdk_event.a 00:03:28.436 SO libspdk_event.so.14.0 00:03:28.436 LIB libspdk_nvme.a 00:03:28.436 SYMLINK libspdk_event.so 00:03:28.694 SO libspdk_nvme.so.13.1 00:03:28.954 SYMLINK libspdk_nvme.so 00:03:30.331 LIB libspdk_blob.a 00:03:30.331 SO libspdk_blob.so.11.0 00:03:30.331 SYMLINK libspdk_blob.so 00:03:30.590 CC lib/lvol/lvol.o 00:03:30.590 CC lib/blobfs/blobfs.o 00:03:30.590 CC lib/blobfs/tree.o 00:03:30.590 LIB libspdk_bdev.a 00:03:30.849 SO libspdk_bdev.so.16.0 00:03:30.849 SYMLINK libspdk_bdev.so 00:03:31.120 CC lib/ublk/ublk.o 00:03:31.120 CC lib/ublk/ublk_rpc.o 00:03:31.120 CC lib/nbd/nbd_rpc.o 00:03:31.120 CC lib/nbd/nbd.o 00:03:31.120 CC lib/nvmf/ctrlr.o 00:03:31.120 
CC lib/nvmf/ctrlr_discovery.o 00:03:31.120 CC lib/scsi/dev.o 00:03:31.120 CC lib/ftl/ftl_core.o 00:03:31.379 CC lib/ftl/ftl_init.o 00:03:31.379 LIB libspdk_blobfs.a 00:03:31.379 CC lib/ftl/ftl_layout.o 00:03:31.379 SO libspdk_blobfs.so.10.0 00:03:31.379 SYMLINK libspdk_blobfs.so 00:03:31.379 CC lib/ftl/ftl_debug.o 00:03:31.379 CC lib/scsi/lun.o 00:03:31.637 LIB libspdk_lvol.a 00:03:31.637 SO libspdk_lvol.so.10.0 00:03:31.637 CC lib/ftl/ftl_io.o 00:03:31.637 CC lib/nvmf/ctrlr_bdev.o 00:03:31.637 LIB libspdk_nbd.a 00:03:31.637 SYMLINK libspdk_lvol.so 00:03:31.637 CC lib/nvmf/subsystem.o 00:03:31.637 SO libspdk_nbd.so.7.0 00:03:31.637 CC lib/ftl/ftl_sb.o 00:03:31.637 SYMLINK libspdk_nbd.so 00:03:31.637 CC lib/ftl/ftl_l2p.o 00:03:31.637 CC lib/scsi/port.o 00:03:31.637 CC lib/scsi/scsi.o 00:03:31.896 CC lib/nvmf/nvmf.o 00:03:31.896 LIB libspdk_ublk.a 00:03:31.896 CC lib/nvmf/nvmf_rpc.o 00:03:31.896 SO libspdk_ublk.so.3.0 00:03:31.896 CC lib/scsi/scsi_bdev.o 00:03:31.896 CC lib/ftl/ftl_l2p_flat.o 00:03:31.896 CC lib/nvmf/transport.o 00:03:31.896 SYMLINK libspdk_ublk.so 00:03:31.896 CC lib/ftl/ftl_nv_cache.o 00:03:31.896 CC lib/ftl/ftl_band.o 00:03:32.153 CC lib/nvmf/tcp.o 00:03:32.153 CC lib/nvmf/stubs.o 00:03:32.411 CC lib/scsi/scsi_pr.o 00:03:32.411 CC lib/ftl/ftl_band_ops.o 00:03:32.669 CC lib/nvmf/mdns_server.o 00:03:32.669 CC lib/nvmf/rdma.o 00:03:32.669 CC lib/ftl/ftl_writer.o 00:03:32.669 CC lib/scsi/scsi_rpc.o 00:03:32.669 CC lib/nvmf/auth.o 00:03:32.669 CC lib/ftl/ftl_rq.o 00:03:32.927 CC lib/scsi/task.o 00:03:32.927 CC lib/ftl/ftl_reloc.o 00:03:32.927 CC lib/ftl/ftl_l2p_cache.o 00:03:32.927 CC lib/ftl/ftl_p2l.o 00:03:32.927 CC lib/ftl/mngt/ftl_mngt.o 00:03:32.927 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:32.927 LIB libspdk_scsi.a 00:03:32.927 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:33.186 SO libspdk_scsi.so.9.0 00:03:33.186 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:33.186 SYMLINK libspdk_scsi.so 00:03:33.186 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:33.186 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:33.443 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:33.443 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:33.443 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:33.443 CC lib/iscsi/conn.o 00:03:33.443 CC lib/vhost/vhost.o 00:03:33.443 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:33.443 CC lib/vhost/vhost_rpc.o 00:03:33.443 CC lib/vhost/vhost_scsi.o 00:03:33.701 CC lib/iscsi/init_grp.o 00:03:33.701 CC lib/iscsi/iscsi.o 00:03:33.701 CC lib/iscsi/md5.o 00:03:33.701 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:33.701 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:33.959 CC lib/iscsi/param.o 00:03:33.959 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:33.959 CC lib/ftl/utils/ftl_conf.o 00:03:34.217 CC lib/vhost/vhost_blk.o 00:03:34.217 CC lib/vhost/rte_vhost_user.o 00:03:34.217 CC lib/iscsi/portal_grp.o 00:03:34.217 CC lib/ftl/utils/ftl_md.o 00:03:34.217 CC lib/ftl/utils/ftl_mempool.o 00:03:34.217 CC lib/ftl/utils/ftl_bitmap.o 00:03:34.217 CC lib/ftl/utils/ftl_property.o 00:03:34.476 CC lib/iscsi/tgt_node.o 00:03:34.476 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:34.476 CC lib/iscsi/iscsi_subsystem.o 00:03:34.476 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:34.476 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:34.733 CC lib/iscsi/iscsi_rpc.o 00:03:34.733 CC lib/iscsi/task.o 00:03:34.733 LIB libspdk_nvmf.a 00:03:34.733 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:34.733 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:34.733 SO libspdk_nvmf.so.19.0 00:03:34.733 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:34.992 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:34.992 CC 
lib/ftl/upgrade/ftl_sb_v3.o 00:03:34.992 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:34.992 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:34.992 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:34.992 SYMLINK libspdk_nvmf.so 00:03:34.992 CC lib/ftl/base/ftl_base_dev.o 00:03:34.992 CC lib/ftl/base/ftl_base_bdev.o 00:03:34.992 LIB libspdk_iscsi.a 00:03:34.992 CC lib/ftl/ftl_trace.o 00:03:35.252 SO libspdk_iscsi.so.8.0 00:03:35.252 LIB libspdk_vhost.a 00:03:35.252 SYMLINK libspdk_iscsi.so 00:03:35.252 LIB libspdk_ftl.a 00:03:35.252 SO libspdk_vhost.so.8.0 00:03:35.510 SYMLINK libspdk_vhost.so 00:03:35.510 SO libspdk_ftl.so.9.0 00:03:36.078 SYMLINK libspdk_ftl.so 00:03:36.336 CC module/env_dpdk/env_dpdk_rpc.o 00:03:36.336 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:36.336 CC module/accel/dsa/accel_dsa.o 00:03:36.336 CC module/accel/error/accel_error.o 00:03:36.336 CC module/accel/ioat/accel_ioat.o 00:03:36.336 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:36.336 CC module/keyring/file/keyring.o 00:03:36.336 CC module/accel/iaa/accel_iaa.o 00:03:36.336 CC module/sock/posix/posix.o 00:03:36.336 CC module/blob/bdev/blob_bdev.o 00:03:36.594 LIB libspdk_env_dpdk_rpc.a 00:03:36.594 SO libspdk_env_dpdk_rpc.so.6.0 00:03:36.594 SYMLINK libspdk_env_dpdk_rpc.so 00:03:36.594 CC module/accel/ioat/accel_ioat_rpc.o 00:03:36.594 CC module/keyring/file/keyring_rpc.o 00:03:36.594 LIB libspdk_scheduler_dpdk_governor.a 00:03:36.594 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:36.594 LIB libspdk_scheduler_dynamic.a 00:03:36.594 CC module/accel/error/accel_error_rpc.o 00:03:36.594 CC module/accel/iaa/accel_iaa_rpc.o 00:03:36.594 SO libspdk_scheduler_dynamic.so.4.0 00:03:36.595 CC module/accel/dsa/accel_dsa_rpc.o 00:03:36.853 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:36.853 LIB libspdk_accel_ioat.a 00:03:36.853 LIB libspdk_keyring_file.a 00:03:36.853 LIB libspdk_blob_bdev.a 00:03:36.853 SYMLINK libspdk_scheduler_dynamic.so 00:03:36.853 SO libspdk_accel_ioat.so.6.0 00:03:36.853 SO libspdk_blob_bdev.so.11.0 00:03:36.853 SO libspdk_keyring_file.so.1.0 00:03:36.853 LIB libspdk_accel_error.a 00:03:36.853 LIB libspdk_accel_iaa.a 00:03:36.853 SO libspdk_accel_error.so.2.0 00:03:36.853 SO libspdk_accel_iaa.so.3.0 00:03:36.853 SYMLINK libspdk_accel_ioat.so 00:03:36.853 SYMLINK libspdk_blob_bdev.so 00:03:36.853 SYMLINK libspdk_keyring_file.so 00:03:36.853 LIB libspdk_accel_dsa.a 00:03:36.853 SYMLINK libspdk_accel_error.so 00:03:36.853 SYMLINK libspdk_accel_iaa.so 00:03:36.853 SO libspdk_accel_dsa.so.5.0 00:03:36.853 CC module/sock/uring/uring.o 00:03:36.853 CC module/scheduler/gscheduler/gscheduler.o 00:03:36.853 CC module/keyring/linux/keyring.o 00:03:36.853 CC module/keyring/linux/keyring_rpc.o 00:03:36.853 SYMLINK libspdk_accel_dsa.so 00:03:37.112 LIB libspdk_scheduler_gscheduler.a 00:03:37.112 LIB libspdk_keyring_linux.a 00:03:37.112 SO libspdk_scheduler_gscheduler.so.4.0 00:03:37.112 CC module/bdev/delay/vbdev_delay.o 00:03:37.112 CC module/bdev/error/vbdev_error.o 00:03:37.112 CC module/bdev/gpt/gpt.o 00:03:37.112 CC module/blobfs/bdev/blobfs_bdev.o 00:03:37.112 CC module/bdev/lvol/vbdev_lvol.o 00:03:37.112 SO libspdk_keyring_linux.so.1.0 00:03:37.112 LIB libspdk_sock_posix.a 00:03:37.112 SYMLINK libspdk_scheduler_gscheduler.so 00:03:37.112 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:37.112 SO libspdk_sock_posix.so.6.0 00:03:37.112 SYMLINK libspdk_keyring_linux.so 00:03:37.112 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:37.371 CC module/bdev/malloc/bdev_malloc.o 00:03:37.371 SYMLINK libspdk_sock_posix.so 
00:03:37.371 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:37.371 CC module/bdev/gpt/vbdev_gpt.o 00:03:37.371 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:37.371 CC module/bdev/error/vbdev_error_rpc.o 00:03:37.371 LIB libspdk_blobfs_bdev.a 00:03:37.371 SO libspdk_blobfs_bdev.so.6.0 00:03:37.371 SYMLINK libspdk_blobfs_bdev.so 00:03:37.629 LIB libspdk_bdev_delay.a 00:03:37.629 SO libspdk_bdev_delay.so.6.0 00:03:37.629 LIB libspdk_bdev_error.a 00:03:37.629 LIB libspdk_sock_uring.a 00:03:37.629 LIB libspdk_bdev_gpt.a 00:03:37.629 SO libspdk_bdev_error.so.6.0 00:03:37.629 LIB libspdk_bdev_malloc.a 00:03:37.629 SO libspdk_bdev_gpt.so.6.0 00:03:37.629 SO libspdk_sock_uring.so.5.0 00:03:37.629 SYMLINK libspdk_bdev_delay.so 00:03:37.629 CC module/bdev/null/bdev_null.o 00:03:37.629 LIB libspdk_bdev_lvol.a 00:03:37.629 SO libspdk_bdev_malloc.so.6.0 00:03:37.629 CC module/bdev/passthru/vbdev_passthru.o 00:03:37.629 CC module/bdev/nvme/bdev_nvme.o 00:03:37.629 SYMLINK libspdk_bdev_error.so 00:03:37.629 SYMLINK libspdk_bdev_gpt.so 00:03:37.629 SO libspdk_bdev_lvol.so.6.0 00:03:37.629 SYMLINK libspdk_sock_uring.so 00:03:37.629 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:37.629 SYMLINK libspdk_bdev_malloc.so 00:03:37.629 SYMLINK libspdk_bdev_lvol.so 00:03:37.629 CC module/bdev/raid/bdev_raid.o 00:03:37.887 CC module/bdev/split/vbdev_split.o 00:03:37.887 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:37.887 CC module/bdev/aio/bdev_aio.o 00:03:37.887 CC module/bdev/uring/bdev_uring.o 00:03:37.887 CC module/bdev/null/bdev_null_rpc.o 00:03:37.887 CC module/bdev/ftl/bdev_ftl.o 00:03:37.887 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:38.146 CC module/bdev/split/vbdev_split_rpc.o 00:03:38.146 LIB libspdk_bdev_null.a 00:03:38.146 SO libspdk_bdev_null.so.6.0 00:03:38.146 LIB libspdk_bdev_passthru.a 00:03:38.146 SO libspdk_bdev_passthru.so.6.0 00:03:38.146 SYMLINK libspdk_bdev_null.so 00:03:38.146 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:38.146 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:38.146 LIB libspdk_bdev_split.a 00:03:38.146 CC module/bdev/aio/bdev_aio_rpc.o 00:03:38.146 SYMLINK libspdk_bdev_passthru.so 00:03:38.146 CC module/bdev/uring/bdev_uring_rpc.o 00:03:38.146 CC module/bdev/nvme/nvme_rpc.o 00:03:38.146 SO libspdk_bdev_split.so.6.0 00:03:38.405 CC module/bdev/nvme/bdev_mdns_client.o 00:03:38.405 SYMLINK libspdk_bdev_split.so 00:03:38.405 LIB libspdk_bdev_zone_block.a 00:03:38.405 CC module/bdev/iscsi/bdev_iscsi.o 00:03:38.405 LIB libspdk_bdev_aio.a 00:03:38.405 LIB libspdk_bdev_ftl.a 00:03:38.405 SO libspdk_bdev_zone_block.so.6.0 00:03:38.405 LIB libspdk_bdev_uring.a 00:03:38.405 SO libspdk_bdev_ftl.so.6.0 00:03:38.405 SO libspdk_bdev_aio.so.6.0 00:03:38.405 SO libspdk_bdev_uring.so.6.0 00:03:38.405 SYMLINK libspdk_bdev_zone_block.so 00:03:38.405 CC module/bdev/nvme/vbdev_opal.o 00:03:38.405 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:38.405 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:38.405 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:38.405 SYMLINK libspdk_bdev_ftl.so 00:03:38.405 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:38.663 SYMLINK libspdk_bdev_aio.so 00:03:38.663 SYMLINK libspdk_bdev_uring.so 00:03:38.663 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:38.663 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:38.663 CC module/bdev/raid/bdev_raid_rpc.o 00:03:38.663 CC module/bdev/raid/bdev_raid_sb.o 00:03:38.663 CC module/bdev/raid/raid0.o 00:03:38.663 CC module/bdev/raid/raid1.o 00:03:38.663 LIB libspdk_bdev_iscsi.a 00:03:38.663 CC module/bdev/raid/concat.o 00:03:38.922 SO 
libspdk_bdev_iscsi.so.6.0 00:03:38.922 SYMLINK libspdk_bdev_iscsi.so 00:03:38.922 LIB libspdk_bdev_raid.a 00:03:39.182 LIB libspdk_bdev_virtio.a 00:03:39.182 SO libspdk_bdev_raid.so.6.0 00:03:39.182 SO libspdk_bdev_virtio.so.6.0 00:03:39.182 SYMLINK libspdk_bdev_raid.so 00:03:39.182 SYMLINK libspdk_bdev_virtio.so 00:03:39.750 LIB libspdk_bdev_nvme.a 00:03:40.009 SO libspdk_bdev_nvme.so.7.0 00:03:40.009 SYMLINK libspdk_bdev_nvme.so 00:03:40.576 CC module/event/subsystems/vmd/vmd.o 00:03:40.576 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:40.576 CC module/event/subsystems/keyring/keyring.o 00:03:40.576 CC module/event/subsystems/iobuf/iobuf.o 00:03:40.576 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:40.576 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:40.576 CC module/event/subsystems/scheduler/scheduler.o 00:03:40.576 CC module/event/subsystems/sock/sock.o 00:03:40.576 LIB libspdk_event_keyring.a 00:03:40.576 LIB libspdk_event_vhost_blk.a 00:03:40.576 SO libspdk_event_keyring.so.1.0 00:03:40.576 LIB libspdk_event_vmd.a 00:03:40.576 LIB libspdk_event_scheduler.a 00:03:40.576 LIB libspdk_event_sock.a 00:03:40.576 LIB libspdk_event_iobuf.a 00:03:40.576 SO libspdk_event_vhost_blk.so.3.0 00:03:40.576 SO libspdk_event_vmd.so.6.0 00:03:40.576 SO libspdk_event_scheduler.so.4.0 00:03:40.576 SO libspdk_event_sock.so.5.0 00:03:40.576 SYMLINK libspdk_event_keyring.so 00:03:40.835 SO libspdk_event_iobuf.so.3.0 00:03:40.835 SYMLINK libspdk_event_vhost_blk.so 00:03:40.835 SYMLINK libspdk_event_vmd.so 00:03:40.835 SYMLINK libspdk_event_scheduler.so 00:03:40.835 SYMLINK libspdk_event_sock.so 00:03:40.835 SYMLINK libspdk_event_iobuf.so 00:03:41.094 CC module/event/subsystems/accel/accel.o 00:03:41.094 LIB libspdk_event_accel.a 00:03:41.353 SO libspdk_event_accel.so.6.0 00:03:41.353 SYMLINK libspdk_event_accel.so 00:03:41.611 CC module/event/subsystems/bdev/bdev.o 00:03:41.869 LIB libspdk_event_bdev.a 00:03:41.869 SO libspdk_event_bdev.so.6.0 00:03:41.869 SYMLINK libspdk_event_bdev.so 00:03:42.128 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:42.128 CC module/event/subsystems/nbd/nbd.o 00:03:42.128 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:42.128 CC module/event/subsystems/ublk/ublk.o 00:03:42.128 CC module/event/subsystems/scsi/scsi.o 00:03:42.387 LIB libspdk_event_nbd.a 00:03:42.387 LIB libspdk_event_ublk.a 00:03:42.387 LIB libspdk_event_scsi.a 00:03:42.387 SO libspdk_event_nbd.so.6.0 00:03:42.387 SO libspdk_event_ublk.so.3.0 00:03:42.387 SO libspdk_event_scsi.so.6.0 00:03:42.387 SYMLINK libspdk_event_nbd.so 00:03:42.387 LIB libspdk_event_nvmf.a 00:03:42.387 SYMLINK libspdk_event_ublk.so 00:03:42.387 SYMLINK libspdk_event_scsi.so 00:03:42.387 SO libspdk_event_nvmf.so.6.0 00:03:42.645 SYMLINK libspdk_event_nvmf.so 00:03:42.645 CC module/event/subsystems/iscsi/iscsi.o 00:03:42.645 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:42.904 LIB libspdk_event_vhost_scsi.a 00:03:42.904 SO libspdk_event_vhost_scsi.so.3.0 00:03:42.904 LIB libspdk_event_iscsi.a 00:03:42.904 SO libspdk_event_iscsi.so.6.0 00:03:42.904 SYMLINK libspdk_event_vhost_scsi.so 00:03:43.163 SYMLINK libspdk_event_iscsi.so 00:03:43.163 SO libspdk.so.6.0 00:03:43.163 SYMLINK libspdk.so 00:03:43.429 CC app/spdk_lspci/spdk_lspci.o 00:03:43.429 CXX app/trace/trace.o 00:03:43.429 CC app/trace_record/trace_record.o 00:03:43.429 CC app/spdk_nvme_perf/perf.o 00:03:43.429 CC app/iscsi_tgt/iscsi_tgt.o 00:03:43.429 CC app/nvmf_tgt/nvmf_main.o 00:03:43.687 CC app/spdk_tgt/spdk_tgt.o 00:03:43.687 CC examples/ioat/perf/perf.o 
00:03:43.687 CC examples/util/zipf/zipf.o 00:03:43.687 CC test/thread/poller_perf/poller_perf.o 00:03:43.687 LINK spdk_lspci 00:03:43.687 LINK zipf 00:03:43.687 LINK poller_perf 00:03:43.687 LINK spdk_trace_record 00:03:43.687 LINK nvmf_tgt 00:03:43.945 LINK spdk_tgt 00:03:43.945 LINK iscsi_tgt 00:03:43.945 LINK ioat_perf 00:03:43.945 CC app/spdk_nvme_identify/identify.o 00:03:43.945 LINK spdk_trace 00:03:44.203 CC app/spdk_top/spdk_top.o 00:03:44.203 CC app/spdk_nvme_discover/discovery_aer.o 00:03:44.203 CC examples/ioat/verify/verify.o 00:03:44.203 CC app/spdk_dd/spdk_dd.o 00:03:44.203 CC test/dma/test_dma/test_dma.o 00:03:44.462 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:44.462 LINK spdk_nvme_discover 00:03:44.462 CC app/fio/nvme/fio_plugin.o 00:03:44.462 LINK verify 00:03:44.462 LINK spdk_nvme_perf 00:03:44.462 CC examples/thread/thread/thread_ex.o 00:03:44.462 LINK interrupt_tgt 00:03:44.720 LINK test_dma 00:03:44.720 LINK spdk_dd 00:03:44.720 LINK thread 00:03:44.720 CC examples/sock/hello_world/hello_sock.o 00:03:44.720 CC examples/vmd/lsvmd/lsvmd.o 00:03:44.720 CC app/fio/bdev/fio_plugin.o 00:03:44.978 CC examples/vmd/led/led.o 00:03:44.978 LINK spdk_nvme_identify 00:03:44.978 LINK spdk_nvme 00:03:44.978 LINK lsvmd 00:03:44.978 LINK hello_sock 00:03:44.978 LINK led 00:03:44.978 LINK spdk_top 00:03:45.237 CC test/app/bdev_svc/bdev_svc.o 00:03:45.237 CC app/vhost/vhost.o 00:03:45.237 TEST_HEADER include/spdk/accel.h 00:03:45.237 TEST_HEADER include/spdk/accel_module.h 00:03:45.237 TEST_HEADER include/spdk/assert.h 00:03:45.237 TEST_HEADER include/spdk/barrier.h 00:03:45.237 TEST_HEADER include/spdk/base64.h 00:03:45.237 TEST_HEADER include/spdk/bdev.h 00:03:45.237 TEST_HEADER include/spdk/bdev_module.h 00:03:45.237 TEST_HEADER include/spdk/bdev_zone.h 00:03:45.237 TEST_HEADER include/spdk/bit_array.h 00:03:45.237 TEST_HEADER include/spdk/bit_pool.h 00:03:45.237 TEST_HEADER include/spdk/blob_bdev.h 00:03:45.237 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:45.237 TEST_HEADER include/spdk/blobfs.h 00:03:45.237 TEST_HEADER include/spdk/blob.h 00:03:45.237 TEST_HEADER include/spdk/conf.h 00:03:45.237 TEST_HEADER include/spdk/config.h 00:03:45.237 CC examples/idxd/perf/perf.o 00:03:45.237 TEST_HEADER include/spdk/cpuset.h 00:03:45.237 TEST_HEADER include/spdk/crc16.h 00:03:45.237 TEST_HEADER include/spdk/crc32.h 00:03:45.237 TEST_HEADER include/spdk/crc64.h 00:03:45.237 TEST_HEADER include/spdk/dif.h 00:03:45.237 TEST_HEADER include/spdk/dma.h 00:03:45.237 TEST_HEADER include/spdk/endian.h 00:03:45.237 TEST_HEADER include/spdk/env_dpdk.h 00:03:45.237 TEST_HEADER include/spdk/env.h 00:03:45.237 TEST_HEADER include/spdk/event.h 00:03:45.237 TEST_HEADER include/spdk/fd_group.h 00:03:45.237 TEST_HEADER include/spdk/fd.h 00:03:45.237 TEST_HEADER include/spdk/file.h 00:03:45.237 TEST_HEADER include/spdk/ftl.h 00:03:45.237 TEST_HEADER include/spdk/gpt_spec.h 00:03:45.237 TEST_HEADER include/spdk/hexlify.h 00:03:45.237 TEST_HEADER include/spdk/histogram_data.h 00:03:45.237 TEST_HEADER include/spdk/idxd.h 00:03:45.237 TEST_HEADER include/spdk/idxd_spec.h 00:03:45.237 TEST_HEADER include/spdk/init.h 00:03:45.237 TEST_HEADER include/spdk/ioat.h 00:03:45.237 TEST_HEADER include/spdk/ioat_spec.h 00:03:45.237 TEST_HEADER include/spdk/iscsi_spec.h 00:03:45.237 TEST_HEADER include/spdk/json.h 00:03:45.237 TEST_HEADER include/spdk/jsonrpc.h 00:03:45.237 TEST_HEADER include/spdk/keyring.h 00:03:45.237 TEST_HEADER include/spdk/keyring_module.h 00:03:45.237 TEST_HEADER include/spdk/likely.h 00:03:45.237 
TEST_HEADER include/spdk/log.h 00:03:45.237 TEST_HEADER include/spdk/lvol.h 00:03:45.237 TEST_HEADER include/spdk/memory.h 00:03:45.237 TEST_HEADER include/spdk/mmio.h 00:03:45.237 TEST_HEADER include/spdk/nbd.h 00:03:45.237 TEST_HEADER include/spdk/net.h 00:03:45.237 TEST_HEADER include/spdk/notify.h 00:03:45.237 TEST_HEADER include/spdk/nvme.h 00:03:45.238 CC test/blobfs/mkfs/mkfs.o 00:03:45.496 TEST_HEADER include/spdk/nvme_intel.h 00:03:45.496 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:45.496 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:45.496 LINK spdk_bdev 00:03:45.496 TEST_HEADER include/spdk/nvme_spec.h 00:03:45.496 TEST_HEADER include/spdk/nvme_zns.h 00:03:45.496 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:45.496 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:45.496 TEST_HEADER include/spdk/nvmf.h 00:03:45.496 TEST_HEADER include/spdk/nvmf_spec.h 00:03:45.496 TEST_HEADER include/spdk/nvmf_transport.h 00:03:45.496 TEST_HEADER include/spdk/opal.h 00:03:45.496 TEST_HEADER include/spdk/opal_spec.h 00:03:45.496 TEST_HEADER include/spdk/pci_ids.h 00:03:45.496 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:45.496 TEST_HEADER include/spdk/pipe.h 00:03:45.496 TEST_HEADER include/spdk/queue.h 00:03:45.496 TEST_HEADER include/spdk/reduce.h 00:03:45.496 TEST_HEADER include/spdk/rpc.h 00:03:45.496 TEST_HEADER include/spdk/scheduler.h 00:03:45.496 TEST_HEADER include/spdk/scsi.h 00:03:45.496 TEST_HEADER include/spdk/scsi_spec.h 00:03:45.496 TEST_HEADER include/spdk/sock.h 00:03:45.496 TEST_HEADER include/spdk/stdinc.h 00:03:45.496 TEST_HEADER include/spdk/string.h 00:03:45.496 TEST_HEADER include/spdk/thread.h 00:03:45.496 TEST_HEADER include/spdk/trace.h 00:03:45.496 TEST_HEADER include/spdk/trace_parser.h 00:03:45.496 TEST_HEADER include/spdk/tree.h 00:03:45.496 TEST_HEADER include/spdk/ublk.h 00:03:45.496 TEST_HEADER include/spdk/util.h 00:03:45.496 TEST_HEADER include/spdk/uuid.h 00:03:45.496 TEST_HEADER include/spdk/version.h 00:03:45.496 LINK bdev_svc 00:03:45.496 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:45.496 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:45.496 TEST_HEADER include/spdk/vhost.h 00:03:45.496 TEST_HEADER include/spdk/vmd.h 00:03:45.496 TEST_HEADER include/spdk/xor.h 00:03:45.496 CC examples/accel/perf/accel_perf.o 00:03:45.496 TEST_HEADER include/spdk/zipf.h 00:03:45.496 CXX test/cpp_headers/accel.o 00:03:45.496 LINK vhost 00:03:45.496 CC examples/blob/hello_world/hello_blob.o 00:03:45.496 LINK mkfs 00:03:45.496 CC test/env/mem_callbacks/mem_callbacks.o 00:03:45.755 LINK idxd_perf 00:03:45.755 CC examples/blob/cli/blobcli.o 00:03:45.755 CXX test/cpp_headers/accel_module.o 00:03:45.755 CXX test/cpp_headers/assert.o 00:03:45.755 CXX test/cpp_headers/barrier.o 00:03:45.755 LINK hello_blob 00:03:45.755 LINK nvme_fuzz 00:03:46.015 CC test/app/histogram_perf/histogram_perf.o 00:03:46.015 CXX test/cpp_headers/base64.o 00:03:46.015 CC test/app/jsoncat/jsoncat.o 00:03:46.015 LINK accel_perf 00:03:46.015 CC test/app/stub/stub.o 00:03:46.015 CXX test/cpp_headers/bdev.o 00:03:46.015 CC test/event/event_perf/event_perf.o 00:03:46.015 LINK histogram_perf 00:03:46.015 LINK jsoncat 00:03:46.015 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:46.274 LINK blobcli 00:03:46.274 CXX test/cpp_headers/bdev_module.o 00:03:46.274 LINK event_perf 00:03:46.274 LINK stub 00:03:46.274 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:46.274 LINK mem_callbacks 00:03:46.274 CC test/env/vtophys/vtophys.o 00:03:46.274 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:46.274 CC 
test/env/memory/memory_ut.o 00:03:46.274 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:46.533 CXX test/cpp_headers/bdev_zone.o 00:03:46.533 CC test/event/reactor/reactor.o 00:03:46.533 LINK vtophys 00:03:46.533 CC test/env/pci/pci_ut.o 00:03:46.533 LINK env_dpdk_post_init 00:03:46.533 CC examples/nvme/hello_world/hello_world.o 00:03:46.533 CXX test/cpp_headers/bit_array.o 00:03:46.533 LINK reactor 00:03:46.792 CC examples/bdev/hello_world/hello_bdev.o 00:03:46.792 CXX test/cpp_headers/bit_pool.o 00:03:46.792 LINK vhost_fuzz 00:03:46.792 CXX test/cpp_headers/blob_bdev.o 00:03:46.792 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:46.792 CC examples/nvme/reconnect/reconnect.o 00:03:46.792 CC test/event/reactor_perf/reactor_perf.o 00:03:46.792 LINK hello_world 00:03:47.051 LINK hello_bdev 00:03:47.051 LINK pci_ut 00:03:47.051 CXX test/cpp_headers/blobfs_bdev.o 00:03:47.051 CC test/event/app_repeat/app_repeat.o 00:03:47.051 LINK reactor_perf 00:03:47.310 CC examples/nvme/arbitration/arbitration.o 00:03:47.310 LINK reconnect 00:03:47.310 CXX test/cpp_headers/blobfs.o 00:03:47.310 LINK app_repeat 00:03:47.310 CC examples/bdev/bdevperf/bdevperf.o 00:03:47.310 CXX test/cpp_headers/blob.o 00:03:47.310 LINK nvme_manage 00:03:47.568 CXX test/cpp_headers/conf.o 00:03:47.568 CC test/lvol/esnap/esnap.o 00:03:47.568 LINK memory_ut 00:03:47.568 CC test/event/scheduler/scheduler.o 00:03:47.568 LINK arbitration 00:03:47.568 CXX test/cpp_headers/config.o 00:03:47.825 CC test/rpc_client/rpc_client_test.o 00:03:47.825 CXX test/cpp_headers/cpuset.o 00:03:47.825 CC test/nvme/aer/aer.o 00:03:47.825 CC test/accel/dif/dif.o 00:03:47.825 CXX test/cpp_headers/crc16.o 00:03:47.825 CXX test/cpp_headers/crc32.o 00:03:47.825 LINK scheduler 00:03:47.825 LINK iscsi_fuzz 00:03:47.825 LINK rpc_client_test 00:03:48.082 CC examples/nvme/hotplug/hotplug.o 00:03:48.082 LINK aer 00:03:48.082 CXX test/cpp_headers/crc64.o 00:03:48.082 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:48.082 LINK bdevperf 00:03:48.339 CC examples/nvme/abort/abort.o 00:03:48.339 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:48.339 CXX test/cpp_headers/dif.o 00:03:48.339 LINK hotplug 00:03:48.339 CC test/nvme/reset/reset.o 00:03:48.339 LINK dif 00:03:48.339 LINK cmb_copy 00:03:48.339 CC test/nvme/sgl/sgl.o 00:03:48.339 CXX test/cpp_headers/dma.o 00:03:48.339 LINK pmr_persistence 00:03:48.339 CXX test/cpp_headers/endian.o 00:03:48.597 CC test/nvme/e2edp/nvme_dp.o 00:03:48.597 CXX test/cpp_headers/env_dpdk.o 00:03:48.597 LINK reset 00:03:48.597 LINK abort 00:03:48.597 CXX test/cpp_headers/env.o 00:03:48.597 LINK sgl 00:03:48.597 CC test/nvme/overhead/overhead.o 00:03:48.856 CXX test/cpp_headers/event.o 00:03:48.856 CC test/nvme/err_injection/err_injection.o 00:03:48.856 CC test/bdev/bdevio/bdevio.o 00:03:48.856 LINK nvme_dp 00:03:48.856 CC test/nvme/startup/startup.o 00:03:48.856 CXX test/cpp_headers/fd_group.o 00:03:48.856 CC test/nvme/reserve/reserve.o 00:03:48.856 LINK err_injection 00:03:49.114 CC test/nvme/simple_copy/simple_copy.o 00:03:49.114 LINK overhead 00:03:49.114 LINK startup 00:03:49.114 CC test/nvme/connect_stress/connect_stress.o 00:03:49.114 CC examples/nvmf/nvmf/nvmf.o 00:03:49.114 CXX test/cpp_headers/fd.o 00:03:49.114 LINK reserve 00:03:49.114 LINK bdevio 00:03:49.372 CC test/nvme/boot_partition/boot_partition.o 00:03:49.372 CXX test/cpp_headers/file.o 00:03:49.372 LINK connect_stress 00:03:49.372 LINK simple_copy 00:03:49.372 CC test/nvme/compliance/nvme_compliance.o 00:03:49.372 CC 
test/nvme/fused_ordering/fused_ordering.o 00:03:49.372 LINK nvmf 00:03:49.372 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:49.372 CXX test/cpp_headers/ftl.o 00:03:49.372 LINK boot_partition 00:03:49.372 CXX test/cpp_headers/gpt_spec.o 00:03:49.629 CC test/nvme/fdp/fdp.o 00:03:49.629 CC test/nvme/cuse/cuse.o 00:03:49.629 LINK fused_ordering 00:03:49.629 LINK nvme_compliance 00:03:49.629 CXX test/cpp_headers/hexlify.o 00:03:49.629 CXX test/cpp_headers/histogram_data.o 00:03:49.629 CXX test/cpp_headers/idxd.o 00:03:49.629 CXX test/cpp_headers/idxd_spec.o 00:03:49.629 LINK doorbell_aers 00:03:49.887 CXX test/cpp_headers/init.o 00:03:49.887 CXX test/cpp_headers/ioat.o 00:03:49.887 CXX test/cpp_headers/ioat_spec.o 00:03:49.887 CXX test/cpp_headers/iscsi_spec.o 00:03:49.887 CXX test/cpp_headers/json.o 00:03:49.887 CXX test/cpp_headers/jsonrpc.o 00:03:49.887 CXX test/cpp_headers/keyring.o 00:03:49.887 LINK fdp 00:03:49.887 CXX test/cpp_headers/keyring_module.o 00:03:49.887 CXX test/cpp_headers/likely.o 00:03:49.887 CXX test/cpp_headers/log.o 00:03:50.145 CXX test/cpp_headers/lvol.o 00:03:50.145 CXX test/cpp_headers/memory.o 00:03:50.145 CXX test/cpp_headers/mmio.o 00:03:50.145 CXX test/cpp_headers/nbd.o 00:03:50.145 CXX test/cpp_headers/net.o 00:03:50.145 CXX test/cpp_headers/notify.o 00:03:50.145 CXX test/cpp_headers/nvme.o 00:03:50.145 CXX test/cpp_headers/nvme_intel.o 00:03:50.145 CXX test/cpp_headers/nvme_ocssd.o 00:03:50.145 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:50.145 CXX test/cpp_headers/nvme_spec.o 00:03:50.145 CXX test/cpp_headers/nvme_zns.o 00:03:50.145 CXX test/cpp_headers/nvmf_cmd.o 00:03:50.404 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:50.404 CXX test/cpp_headers/nvmf.o 00:03:50.404 CXX test/cpp_headers/nvmf_spec.o 00:03:50.404 CXX test/cpp_headers/nvmf_transport.o 00:03:50.404 CXX test/cpp_headers/opal.o 00:03:50.404 CXX test/cpp_headers/opal_spec.o 00:03:50.404 CXX test/cpp_headers/pci_ids.o 00:03:50.404 CXX test/cpp_headers/pipe.o 00:03:50.663 CXX test/cpp_headers/queue.o 00:03:50.663 CXX test/cpp_headers/reduce.o 00:03:50.663 CXX test/cpp_headers/rpc.o 00:03:50.663 CXX test/cpp_headers/scheduler.o 00:03:50.663 CXX test/cpp_headers/scsi.o 00:03:50.663 CXX test/cpp_headers/scsi_spec.o 00:03:50.663 CXX test/cpp_headers/sock.o 00:03:50.663 CXX test/cpp_headers/stdinc.o 00:03:50.663 CXX test/cpp_headers/string.o 00:03:50.663 CXX test/cpp_headers/thread.o 00:03:50.663 CXX test/cpp_headers/trace.o 00:03:50.663 CXX test/cpp_headers/trace_parser.o 00:03:50.921 CXX test/cpp_headers/tree.o 00:03:50.921 CXX test/cpp_headers/ublk.o 00:03:50.921 CXX test/cpp_headers/util.o 00:03:50.921 CXX test/cpp_headers/uuid.o 00:03:50.921 CXX test/cpp_headers/version.o 00:03:50.921 CXX test/cpp_headers/vfio_user_pci.o 00:03:50.921 CXX test/cpp_headers/vfio_user_spec.o 00:03:50.921 CXX test/cpp_headers/vhost.o 00:03:50.921 CXX test/cpp_headers/vmd.o 00:03:50.921 CXX test/cpp_headers/xor.o 00:03:50.921 CXX test/cpp_headers/zipf.o 00:03:51.179 LINK cuse 00:03:53.082 LINK esnap 00:03:53.650 00:03:53.650 real 1m5.150s 00:03:53.650 user 6m37.167s 00:03:53.650 sys 1m37.693s 00:03:53.650 13:46:42 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:53.650 ************************************ 00:03:53.650 END TEST make 00:03:53.650 ************************************ 00:03:53.650 13:46:42 make -- common/autotest_common.sh@10 -- $ set +x 00:03:53.650 13:46:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:53.650 13:46:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:53.650 
13:46:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:53.650 13:46:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:53.650 13:46:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:53.650 13:46:42 -- pm/common@44 -- $ pid=5136 00:03:53.650 13:46:42 -- pm/common@50 -- $ kill -TERM 5136 00:03:53.650 13:46:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:53.650 13:46:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:53.650 13:46:42 -- pm/common@44 -- $ pid=5138 00:03:53.650 13:46:42 -- pm/common@50 -- $ kill -TERM 5138 00:03:53.650 13:46:42 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:53.650 13:46:42 -- nvmf/common.sh@7 -- # uname -s 00:03:53.650 13:46:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:53.650 13:46:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:53.650 13:46:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:53.650 13:46:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:53.650 13:46:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:53.650 13:46:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:53.650 13:46:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:53.650 13:46:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:53.650 13:46:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:53.650 13:46:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:53.650 13:46:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:03:53.650 13:46:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:03:53.650 13:46:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:53.650 13:46:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:53.650 13:46:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:53.650 13:46:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:53.650 13:46:42 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:53.650 13:46:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:53.650 13:46:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:53.650 13:46:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:53.650 13:46:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.650 13:46:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.650 13:46:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.650 13:46:42 -- paths/export.sh@5 -- # export PATH 00:03:53.650 13:46:42 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.650 13:46:42 -- nvmf/common.sh@47 -- # : 0 00:03:53.650 13:46:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:53.651 13:46:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:53.651 13:46:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:53.651 13:46:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:53.651 13:46:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:53.651 13:46:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:53.651 13:46:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:53.651 13:46:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:53.651 13:46:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:53.651 13:46:42 -- spdk/autotest.sh@32 -- # uname -s 00:03:53.651 13:46:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:53.651 13:46:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:53.651 13:46:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:53.651 13:46:42 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:53.651 13:46:42 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:53.651 13:46:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:53.651 13:46:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:53.651 13:46:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:53.651 13:46:42 -- spdk/autotest.sh@48 -- # udevadm_pid=52792 00:03:53.651 13:46:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:53.651 13:46:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:53.651 13:46:42 -- pm/common@17 -- # local monitor 00:03:53.651 13:46:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:53.651 13:46:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:53.651 13:46:42 -- pm/common@21 -- # date +%s 00:03:53.651 13:46:42 -- pm/common@25 -- # sleep 1 00:03:53.651 13:46:42 -- pm/common@21 -- # date +%s 00:03:53.651 13:46:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721915202 00:03:53.651 13:46:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721915202 00:03:53.651 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721915202_collect-cpu-load.pm.log 00:03:53.910 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721915202_collect-vmstat.pm.log 00:03:54.842 13:46:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:54.842 13:46:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:54.842 13:46:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:54.842 13:46:43 -- common/autotest_common.sh@10 -- # set +x 00:03:54.842 13:46:43 -- spdk/autotest.sh@59 -- # create_test_list 00:03:54.842 13:46:43 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:54.842 13:46:43 -- common/autotest_common.sh@10 -- # set +x 00:03:54.842 13:46:43 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:54.842 13:46:43 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:54.842 13:46:43 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:54.842 13:46:43 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:54.842 13:46:43 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:54.842 13:46:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:54.842 13:46:43 -- common/autotest_common.sh@1455 -- # uname 00:03:54.842 13:46:43 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:54.842 13:46:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:54.842 13:46:43 -- common/autotest_common.sh@1475 -- # uname 00:03:54.842 13:46:43 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:54.842 13:46:43 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:54.842 13:46:43 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:54.842 13:46:43 -- spdk/autotest.sh@72 -- # hash lcov 00:03:54.843 13:46:43 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:54.843 13:46:43 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:54.843 --rc lcov_branch_coverage=1 00:03:54.843 --rc lcov_function_coverage=1 00:03:54.843 --rc genhtml_branch_coverage=1 00:03:54.843 --rc genhtml_function_coverage=1 00:03:54.843 --rc genhtml_legend=1 00:03:54.843 --rc geninfo_all_blocks=1 00:03:54.843 ' 00:03:54.843 13:46:43 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:54.843 --rc lcov_branch_coverage=1 00:03:54.843 --rc lcov_function_coverage=1 00:03:54.843 --rc genhtml_branch_coverage=1 00:03:54.843 --rc genhtml_function_coverage=1 00:03:54.843 --rc genhtml_legend=1 00:03:54.843 --rc geninfo_all_blocks=1 00:03:54.843 ' 00:03:54.843 13:46:43 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:54.843 --rc lcov_branch_coverage=1 00:03:54.843 --rc lcov_function_coverage=1 00:03:54.843 --rc genhtml_branch_coverage=1 00:03:54.843 --rc genhtml_function_coverage=1 00:03:54.843 --rc genhtml_legend=1 00:03:54.843 --rc geninfo_all_blocks=1 00:03:54.843 --no-external' 00:03:54.843 13:46:43 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:54.843 --rc lcov_branch_coverage=1 00:03:54.843 --rc lcov_function_coverage=1 00:03:54.843 --rc genhtml_branch_coverage=1 00:03:54.843 --rc genhtml_function_coverage=1 00:03:54.843 --rc genhtml_legend=1 00:03:54.843 --rc geninfo_all_blocks=1 00:03:54.843 --no-external' 00:03:54.843 13:46:43 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:54.843 lcov: LCOV version 1.14 00:03:54.843 13:46:43 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:12.928 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:12.928 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
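The coverage setup just above exports LCOV_OPTS with branch and function coverage enabled and then takes an initial, zero-count capture (lcov -c -i -t Baseline) before any test binary has produced counter data; the long run of geninfo "no functions found" notices that follows is that baseline pass walking .gcno files which simply contain no function records, which is typical for the compile-only cpp_headers objects and does not fail the job. As a minimal sketch of the capture-and-merge flow only (same rc options as in the log; $src, $out, the Tests label and the cov_test/cov_total names are illustrative placeholders, not the exact autotest commands):

  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  lcov $LCOV_OPTS --no-external -c -i -t Baseline -d "$src" -o "$out/cov_base.info"    # zero-count baseline
  # ... run the test suites so the instrumented binaries write .gcda counters ...
  lcov $LCOV_OPTS --no-external -c -t Tests -d "$src" -o "$out/cov_test.info"          # post-test capture
  lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"   # merge

Folding the baseline into the final tracefile is what keeps files that were compiled but never exercised visible in the report with zero hit counts.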
00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:22.937 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:22.937 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:22.937 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:23.224 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:23.224 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:23.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:23.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:23.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:23.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:23.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:23.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:23.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:23.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:23.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:23.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:23.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:23.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:23.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:23.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:23.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:23.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:23.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:23.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:23.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:23.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:23.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:23.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:23.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:23.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:23.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:23.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:23.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:23.483 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:23.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:23.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:23.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:23.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:23.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:23.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:23.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:23.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:23.483 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:23.483 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:27.662 13:47:15 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:27.662 13:47:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:27.662 13:47:15 -- common/autotest_common.sh@10 -- # set +x 00:04:27.662 13:47:15 -- spdk/autotest.sh@91 -- # rm -f 00:04:27.662 13:47:15 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:27.662 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.662 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:27.662 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:27.662 13:47:16 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:27.662 13:47:16 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:27.662 13:47:16 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:27.662 13:47:16 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:27.662 13:47:16 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:27.662 13:47:16 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:27.662 13:47:16 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:27.662 13:47:16 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:27.662 13:47:16 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:27.662 13:47:16 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:27.662 13:47:16 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:27.662 13:47:16 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:27.662 13:47:16 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:27.662 13:47:16 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:27.662 13:47:16 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:27.662 13:47:16 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:27.662 13:47:16 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:27.662 13:47:16 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:27.662 13:47:16 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:27.662 13:47:16 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:27.662 13:47:16 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 
00:04:27.662 13:47:16 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:27.662 13:47:16 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:27.662 13:47:16 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:27.662 13:47:16 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:27.662 13:47:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:27.662 13:47:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:27.662 13:47:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:27.662 13:47:16 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:27.662 13:47:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:27.662 No valid GPT data, bailing 00:04:27.662 13:47:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:27.920 13:47:16 -- scripts/common.sh@391 -- # pt= 00:04:27.920 13:47:16 -- scripts/common.sh@392 -- # return 1 00:04:27.920 13:47:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:27.920 1+0 records in 00:04:27.920 1+0 records out 00:04:27.920 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00427006 s, 246 MB/s 00:04:27.920 13:47:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:27.920 13:47:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:27.920 13:47:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:27.920 13:47:16 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:27.920 13:47:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:27.920 No valid GPT data, bailing 00:04:27.920 13:47:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:27.920 13:47:16 -- scripts/common.sh@391 -- # pt= 00:04:27.920 13:47:16 -- scripts/common.sh@392 -- # return 1 00:04:27.920 13:47:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:27.920 1+0 records in 00:04:27.920 1+0 records out 00:04:27.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00360047 s, 291 MB/s 00:04:27.921 13:47:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:27.921 13:47:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:27.921 13:47:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:27.921 13:47:16 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:27.921 13:47:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:27.921 No valid GPT data, bailing 00:04:27.921 13:47:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:27.921 13:47:16 -- scripts/common.sh@391 -- # pt= 00:04:27.921 13:47:16 -- scripts/common.sh@392 -- # return 1 00:04:27.921 13:47:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:27.921 1+0 records in 00:04:27.921 1+0 records out 00:04:27.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00361352 s, 290 MB/s 00:04:27.921 13:47:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:27.921 13:47:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:27.921 13:47:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:27.921 13:47:16 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:27.921 13:47:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:27.921 No valid GPT data, bailing 00:04:27.921 13:47:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:27.921 
13:47:16 -- scripts/common.sh@391 -- # pt= 00:04:27.921 13:47:16 -- scripts/common.sh@392 -- # return 1 00:04:27.921 13:47:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:27.921 1+0 records in 00:04:27.921 1+0 records out 00:04:27.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00321919 s, 326 MB/s 00:04:27.921 13:47:16 -- spdk/autotest.sh@118 -- # sync 00:04:28.178 13:47:16 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:28.178 13:47:16 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:28.178 13:47:16 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:30.076 13:47:18 -- spdk/autotest.sh@124 -- # uname -s 00:04:30.076 13:47:18 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:30.076 13:47:18 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:30.076 13:47:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.076 13:47:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.076 13:47:18 -- common/autotest_common.sh@10 -- # set +x 00:04:30.076 ************************************ 00:04:30.076 START TEST setup.sh 00:04:30.076 ************************************ 00:04:30.076 13:47:18 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:30.076 * Looking for test storage... 00:04:30.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:30.076 13:47:18 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:30.076 13:47:18 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:30.076 13:47:18 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:30.076 13:47:18 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.076 13:47:18 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.076 13:47:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:30.076 ************************************ 00:04:30.076 START TEST acl 00:04:30.076 ************************************ 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:30.076 * Looking for test storage... 
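The device-preparation pass above walks the NVMe namespaces, skips any that report themselves as zoned (all four answer "none" here), treats a namespace as free when spdk-gpt.py and blkid find no partition table ("No valid GPT data, bailing", an empty PTTYPE, return 1), and then scrubs its first MiB with dd; in this run nvme0n1, nvme1n1, nvme1n2 and nvme1n3 all came back clean and were zeroed at roughly 250-330 MB/s. Reduced to a sketch (the real helpers live in scripts/common.sh and autotest.sh, the glob there uses extglob to exclude partitions, and the wipe is destructive, so the CI only runs it inside a disposable VM):

  for dev in /dev/nvme*n*; do
    [[ $dev == *p* ]] && continue                                        # skip partition nodes such as nvme0n1p1
    [[ $(cat /sys/block/${dev##*/}/queue/zoned) != none ]] && continue   # leave zoned namespaces alone
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then                 # no partition table -> not in use
      dd if=/dev/zero of="$dev" bs=1M count=1                            # wipe stale metadata in the first MiB
    fi
  done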
00:04:30.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:30.076 13:47:18 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:30.076 13:47:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:30.076 13:47:18 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:30.076 13:47:18 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:30.076 13:47:18 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:30.076 13:47:18 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:30.076 13:47:18 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:30.076 13:47:18 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:30.076 13:47:18 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:31.015 13:47:19 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:31.015 13:47:19 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:31.015 13:47:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.015 13:47:19 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:31.015 13:47:19 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.015 13:47:19 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:31.582 13:47:20 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.582 Hugepages 00:04:31.582 node hugesize free / total 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.582 00:04:31.582 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:31.582 13:47:20 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:31.582 13:47:20 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.582 13:47:20 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.582 13:47:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:31.582 ************************************ 00:04:31.582 START TEST denied 00:04:31.582 ************************************ 00:04:31.582 13:47:20 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:31.582 13:47:20 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:31.582 13:47:20 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:31.582 13:47:20 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:31.582 13:47:20 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.582 13:47:20 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:32.515 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:32.515 13:47:21 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:32.515 13:47:21 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:32.515 13:47:21 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:32.515 13:47:21 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:32.515 13:47:21 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:32.515 13:47:21 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:32.515 13:47:21 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:32.515 13:47:21 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:32.515 13:47:21 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:32.515 13:47:21 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:33.081 00:04:33.081 real 0m1.473s 00:04:33.081 user 0m0.586s 00:04:33.081 sys 0m0.816s 00:04:33.081 13:47:22 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.081 ************************************ 00:04:33.081 END TEST denied 00:04:33.081 ************************************ 00:04:33.081 13:47:22 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:33.081 13:47:22 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:33.081 13:47:22 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.081 13:47:22 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.081 13:47:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:33.081 ************************************ 00:04:33.081 START TEST allowed 00:04:33.081 ************************************ 00:04:33.081 13:47:22 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:04:33.081 13:47:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:33.081 13:47:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:33.081 13:47:22 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:33.081 13:47:22 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.081 13:47:22 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:34.015 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:34.015 13:47:22 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:34.015 13:47:22 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:34.015 13:47:22 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:34.015 13:47:22 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:34.015 13:47:22 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:34.015 13:47:22 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:34.015 13:47:22 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:34.015 13:47:22 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:34.015 13:47:22 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:34.015 13:47:22 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:34.623 00:04:34.623 real 0m1.529s 00:04:34.623 user 0m0.658s 00:04:34.623 sys 0m0.856s 00:04:34.623 13:47:23 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.623 ************************************ 00:04:34.623 13:47:23 
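The denied and allowed cases above exercise scripts/setup.sh purely through its PCI filter variables: PCI_BLOCKED makes the config step skip the listed controller, PCI_ALLOWED restricts binding to it, and each case greps the output for the line proving the filter took effect before a setup.sh reset hands the device back to the kernel nvme driver. Condensed to a sketch (paths shortened relative to the log; the grep patterns are the ones visible in the trace):

  PCI_BLOCKED=' 0000:00:10.0' scripts/setup.sh config \
      | grep 'Skipping denied controller at 0000:00:10.0'   # denied: the blocked controller must be skipped
  scripts/setup.sh reset
  PCI_ALLOWED='0000:00:10.0' scripts/setup.sh config \
      | grep -E '0000:00:10.0 .*: nvme -> .*'               # allowed: the controller rebinds to a userspace driver
  scripts/setup.sh reset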
setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:34.623 END TEST allowed 00:04:34.623 ************************************ 00:04:34.623 ************************************ 00:04:34.623 END TEST acl 00:04:34.623 ************************************ 00:04:34.623 00:04:34.623 real 0m4.802s 00:04:34.623 user 0m2.101s 00:04:34.623 sys 0m2.628s 00:04:34.623 13:47:23 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.623 13:47:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:34.882 13:47:23 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:34.882 13:47:23 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.882 13:47:23 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.882 13:47:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:34.882 ************************************ 00:04:34.882 START TEST hugepages 00:04:34.882 ************************************ 00:04:34.882 13:47:23 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:34.882 * Looking for test storage... 00:04:34.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:34.882 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:34.882 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:34.882 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:34.882 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:34.882 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:34.882 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:34.882 13:47:23 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:34.882 13:47:23 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:34.882 13:47:23 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:34.882 13:47:23 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6031964 kB' 'MemAvailable: 7413684 kB' 'Buffers: 2436 kB' 'Cached: 1595940 kB' 'SwapCached: 0 kB' 'Active: 436184 kB' 'Inactive: 1267028 kB' 'Active(anon): 115324 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 106476 kB' 'Mapped: 48564 kB' 'Shmem: 10488 kB' 'KReclaimable: 61544 kB' 'Slab: 132460 kB' 'SReclaimable: 61544 kB' 'SUnreclaim: 70916 kB' 'KernelStack: 6332 kB' 'PageTables: 4324 kB' 'SecPageTables: 
0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 335016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 13:47:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.883 
[setup/common.sh@31-32: get_meminfo walks the /proc/meminfo snapshot with IFS=': ' read -r var val _, hitting 'continue' for every key until it reaches Hugepagesize]
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
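The xtrace above is setup/common.sh's get_meminfo resolving the system huge page size: it walks a /proc/meminfo snapshot key by key and echoes the value once it reaches Hugepagesize (2048 kB on this VM). A minimal stand-alone sketch of that lookup, using an illustrative helper name that is not part of the SPDK scripts:

    # Sketch only: print the value of one /proc/meminfo field (e.g. Hugepagesize).
    # get_meminfo_field is a hypothetical name, not a function from setup/common.sh.
    get_meminfo_field() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo_field Hugepagesize   # prints 2048 here (value in kB)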
00:04:34.884 13:47:23 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:34.884 13:47:23 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:34.884 13:47:23 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:34.884 13:47:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:34.884 ************************************
00:04:34.884 START TEST default_setup
00:04:34.884 ************************************
00:04:34.884 13:47:23 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:04:34.884 13:47:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:34.884 13:47:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:34.884 13:47:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:34.884 13:47:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:34.884 13:47:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:34.884 13:47:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:34.884 13:47:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:34.885 13:47:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:34.885 13:47:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:34.885 13:47:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:34.885 13:47:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:34.885 13:47:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:34.885 13:47:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:34.885 13:47:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:34.885 13:47:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:34.885 13:47:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:34.885 13:47:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:34.885 13:47:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:34.885 13:47:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:34.885 13:47:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:34.885 13:47:23 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:34.885 13:47:23 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:35.824 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:35.824 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:35.824 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
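Worth pausing on the arithmetic: get_test_nr_hugepages above turns the requested 2097152 kB (2 GiB) into nr_hugepages=1024 using the 2048 kB page size detected earlier, and scripts/setup.sh then reserves the pages and rebinds the NVMe controllers to uio_pci_generic, as the three lines above show. A back-of-the-envelope check of that calculation (the paths in the comments are the ones the log already names; the snippet itself is illustrative, not the harness code):

    # 2097152 kB requested / 2048 kB per huge page = 1024 pages
    size_kb=2097152
    hugepagesize_kb=2048
    echo $((size_kb / hugepagesize_kb))   # -> 1024
    # Applying such a count (as root) comes down to writing it into interfaces
    # like the ones named earlier in the log:
    #   /proc/sys/vm/nr_hugepages                                   (global)
    #   /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages      (per page size)
    #   /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages  (per node)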
00:04:35.824 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:35.824 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:35.824 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:35.824 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:35.824 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:35.824 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:35.824 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:35.824 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:35.824 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:35.824 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:35.824 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:35.824 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:35.824 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:35.824 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.824 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.824 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.824 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.824 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.824 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:35.824 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8117748 kB' 'MemAvailable: 9499292 kB' 'Buffers: 2436 kB' 'Cached: 1595932 kB' 'SwapCached: 0 kB' 'Active: 452768 kB' 'Inactive: 1267040 kB' 'Active(anon): 131908 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122980 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 131960 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70796 kB' 'KernelStack: 6256 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB'
[setup/common.sh@31-32: per-key scan of the snapshot above, hitting 'continue' for every field until AnonHugePages]
00:04:35.826 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:35.826 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:35.826 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:35.826 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
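verify_nr_hugepages has now captured anon=0 from the AnonHugePages field and repeats the same lookup for HugePages_Surp and HugePages_Rsvd below. Outside the harness, the handful of counters this check cares about can be read directly; the expected values follow from the snapshot above (illustrative command, not part of the test):

    grep -E 'AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo
    # From the snapshot above: AnonHugePages 0 kB, HugePages_Total/Free 1024,
    # HugePages_Rsvd/Surp 0, Hugepagesize 2048 kB.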
00:04:35.826 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:35.826 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:35.826 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:35.826 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:35.826 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:35.826 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.826 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.826 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.826 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.826 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.826 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:35.826 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:35.826 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8118020 kB' 'MemAvailable: 9499564 kB' 'Buffers: 2436 kB' 'Cached: 1595932 kB' 'SwapCached: 0 kB' 'Active: 452928 kB' 'Inactive: 1267040 kB' 'Active(anon): 132068 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123140 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 131964 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70800 kB' 'KernelStack: 6208 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB'
[setup/common.sh@31-32: per-key scan of the snapshot above, hitting 'continue' for every field until HugePages_Surp]
00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8117796 kB' 'MemAvailable: 9499340 kB' 'Buffers: 2436 kB' 'Cached: 1595932 kB' 'SwapCached: 0 kB' 'Active: 452624 kB' 'Inactive: 1267040 kB' 'Active(anon): 131764 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122852 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132000 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70836 kB' 'KernelStack: 6240 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB'
[setup/common.sh@31-32: the same per-key scan of the snapshot, this time looking for HugePages_Rsvd, continues below]
13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.828 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.829 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.830 
13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.830 
13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:35.830 nr_hugepages=1024 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:35.830 resv_hugepages=0 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:35.830 surplus_hugepages=0 00:04:35.830 anon_hugepages=0 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8117796 kB' 'MemAvailable: 9499340 kB' 'Buffers: 2436 kB' 'Cached: 1595932 kB' 'SwapCached: 0 kB' 'Active: 452756 kB' 'Inactive: 1267040 kB' 'Active(anon): 131896 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122984 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132000 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70836 kB' 'KernelStack: 6276 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 
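[Editor's annotation, not part of the captured console output] The repeated compare-and-continue statements in this trace are setup/common.sh's get_meminfo helper scanning /proc/meminfo (or a per-NUMA-node copy under /sys/devices/system/node) one field at a time until the requested key, such as HugePages_Rsvd or HugePages_Total, matches, and then echoing its value. Below is a minimal sketch of that pattern reconstructed only from the traced calls; the helper name and file paths come from the trace, but the body is an illustrative approximation, not the verbatim SPDK script.

#!/usr/bin/env bash
# Sketch of the lookup pattern traced above (reconstruction, not the SPDK source).
get_meminfo_sketch() {
    local get=$1 node=${2:-}            # e.g. HugePages_Rsvd; optional NUMA node id
    local mem_f=/proc/meminfo line var val _
    # Per-node statistics live in sysfs and prefix every line with "Node <n> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}      # drop the per-node prefix if present
        IFS=': ' read -r var val _ <<<"$line"
        # Skip field after field until the requested key matches, then return its value.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done <"$mem_f"
    return 1
}

On the meminfo contents printed in this trace, such a helper yields the same values the log echoes: 0 for HugePages_Rsvd and HugePages_Surp, and 1024 for HugePages_Total.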
00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.830 
13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.830 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.831 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
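[Editor's annotation, not part of the captured console output] The scan above is fetching HugePages_Total; once it echoes 1024, the trace that follows asserts that the total equals nr_hugepages plus the surplus and reserved counts echoed earlier (1024, 0 and 0), and then repeats the surplus lookup against /sys/devices/system/node/node0/meminfo for each NUMA node. A simplified, hedged reconstruction of that accounting is sketched below; the function and helper names are illustrative and the flow is condensed, so this is not the verbatim setup/hugepages.sh.

#!/usr/bin/env bash
# Sketch (not console output) of the hugepage accounting these traced (( ... )) checks perform.
meminfo_val() {  # read one field from /proc/meminfo or a per-node meminfo file
    local key=$1 file=${2:-/proc/meminfo}
    awk -v k="${key}:" '$1 == k {print $2} $3 == k {print $4}' "$file"
}

verify_default_setup_sketch() {
    local nr_hugepages=1024 node surp resv total
    surp=$(meminfo_val HugePages_Surp)     # 0 in this run
    resv=$(meminfo_val HugePages_Rsvd)     # 0 in this run
    total=$(meminfo_val HugePages_Total)   # 1024 in this run
    # The system-wide pool must equal the requested pages plus surplus and reserved ones.
    (( total == nr_hugepages + surp + resv )) || return 1
    # Repeat the surplus check for every NUMA node (only node0 on this test VM).
    for node in /sys/devices/system/node/node[0-9]*; do
        (( $(meminfo_val HugePages_Surp "$node/meminfo") == 0 )) || return 1
    done
}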
00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8117544 kB' 'MemUsed: 4124436 kB' 'SwapCached: 0 kB' 'Active: 452616 kB' 'Inactive: 1267040 kB' 'Active(anon): 131756 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1598368 kB' 'Mapped: 48568 kB' 'AnonPages: 122920 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61164 kB' 'Slab: 131992 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:35.832 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 
13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.092 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- 
# sorted_t[nodes_test[node]]=1 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:36.093 node0=1024 expecting 1024 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:36.093 ************************************ 00:04:36.093 END TEST default_setup 00:04:36.093 ************************************ 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:36.093 00:04:36.093 real 0m1.018s 00:04:36.093 user 0m0.468s 00:04:36.093 sys 0m0.477s 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.093 13:47:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:36.093 13:47:24 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:36.093 13:47:24 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.093 13:47:24 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.093 13:47:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:36.093 ************************************ 00:04:36.093 START TEST per_node_1G_alloc 00:04:36.093 ************************************ 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # 
nodes_test[_no_nodes]=512 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.093 13:47:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:36.353 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:36.353 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:36.353 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9160360 kB' 'MemAvailable: 10541908 kB' 'Buffers: 2436 kB' 'Cached: 1595932 kB' 'SwapCached: 0 kB' 'Active: 452864 kB' 'Inactive: 1267044 kB' 'Active(anon): 132004 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 
'Inactive(file): 1267044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123112 kB' 'Mapped: 48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 131984 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70820 kB' 'KernelStack: 6272 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.353 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
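The trace above is the AnonHugePages lookup from setup/common.sh (the @17..@33 references): the script snapshots /proc/meminfo, then re-reads it with IFS=': ' and read -r var val _, skipping every key that is not the one requested and echoing the matching value. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source:

    get_meminfo() {
        # Simplified reconstruction of the lookup traced above (common.sh@17..@33).
        # The real helper also handles /sys/devices/system/node/node$node/meminfo
        # and strips the "Node N " prefix (common.sh@23/@29); that branch is omitted here.
        local get=$1
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every non-matching key is skipped, as in the trace
            echo "$val"                        # e.g. "echo 0" for AnonHugePages above
            return 0
        done < /proc/meminfo
        return 1
    }

    # Against the snapshot printed above: get_meminfo HugePages_Free  ->  512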
00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
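For reference, the 512 pages being verified here follow directly from the request made at hugepages.sh@145 above: per_node_1G_alloc asked for 1048576 kB (1 GiB) on node 0, and at the default 2048 kB hugepage size that works out to 512 pages, matching the HugePages_Total: 512 and Hugetlb: 1048576 kB fields in the snapshot:

    # 1 GiB requested for node 0 at the default 2 MiB hugepage size:
    echo $(( 1048576 / 2048 ))   # 512 pages -> nodes_test[0]=512 in the trace
    echo $(( 512 * 2048 ))       # 1048576 kB, the Hugetlb figure reported above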
00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.354 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9160112 kB' 'MemAvailable: 10541660 kB' 'Buffers: 2436 kB' 'Cached: 1595932 kB' 'SwapCached: 0 kB' 'Active: 452700 kB' 'Inactive: 1267044 kB' 'Active(anon): 131840 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122952 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 131984 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70820 kB' 'KernelStack: 6272 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.355 13:47:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.355 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.617 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.617 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.617 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.617 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.617 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.617 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.617 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.617 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:36.617 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.617 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.617 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.617 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.617 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.617 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.617 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.617 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.617 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.618 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
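The reservation being verified here was made just before this point: hugepages.sh@146 in the trace above exported NRHUGE=512 and HUGENODE=0 and then ran /home/vagrant/spdk_repo/spdk/scripts/setup.sh (common.sh@10). A hedged usage sketch of that step outside the test harness, assuming root and the same repo path as this log; the exact semantics of NRHUGE/HUGENODE are whatever scripts/setup.sh defines:

    # Reserve 512 x 2 MiB hugepages on NUMA node 0, as the test did above.
    NRHUGE=512 HUGENODE=0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh

    # Read the per-node count back (standard kernel sysfs path):
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages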
00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
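The HugePages_Surp scan above (hugepages.sh@99, yielding surp=0) and the HugePages_Rsvd lookup that follows (hugepages.sh@100) feed the final consistency check of verify_nr_hugepages. A hypothetical sketch of that kind of check, with the names and exact comparison assumed rather than taken from hugepages.sh:

    # Hypothetical check: configured count vs. what the kernel reports,
    # discounting surplus pages (an assumption, not the literal hugepages.sh logic).
    expected=512
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    if (( total - surp == expected )); then
        echo "node0=$expected expecting $expected"
    else
        echo "unexpected hugepage count: total=$total surp=$surp" >&2
    fi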
00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9160112 kB' 'MemAvailable: 10541660 kB' 'Buffers: 2436 kB' 'Cached: 1595932 kB' 'SwapCached: 0 kB' 'Active: 452668 kB' 'Inactive: 1267044 kB' 'Active(anon): 131808 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122960 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 131980 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70816 kB' 'KernelStack: 6272 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.619 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
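[editor's note] The loop traced above is the field lookup inside setup/common.sh's get_meminfo: the meminfo file is slurped with mapfile, any leading "Node <n> " prefix is stripped, and each "key: value" pair is compared against the requested field (here HugePages_Rsvd); every mismatch hits continue, and the match echoes its value. A minimal stand-alone sketch of the same lookup follows. It is a simplified reimplementation for illustration only; the helper name get_meminfo_value and the awk-based parsing are this sketch's own, not SPDK's code.

    # get_meminfo_value <field> [node-id]
    # Prints the value of <field> from /proc/meminfo, or from the per-node file
    # /sys/devices/system/node/node<id>/meminfo when a node id is given.
    get_meminfo_value() {
        local field=$1 node=${2:-}
        local file=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        awk -v key="${field}:" '
            $1 == "Node" { sub(/^Node [0-9]+ /, "") }  # per-node lines carry a "Node <n>" prefix
            $1 == key    { print $2; exit }            # first match wins, like the traced loop
        ' "$file"
    }

    # Example queries matching the ones exercised in this trace:
    get_meminfo_value HugePages_Rsvd        # global reserved hugepages
    get_meminfo_value HugePages_Surp 0      # surplus hugepages on node 0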
00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.620 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
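[editor's note] Once the surplus, reserved, and total lookups complete, the test applies the accounting identity visible further down in the trace: HugePages_Total must equal the configured nr_hugepages plus surplus plus reserved pages, i.e. (( 512 == nr_hugepages + surp + resv )). A sketch of that check, reusing the illustrative helper above; the variable names follow the trace, but this is not the hugepages.sh code itself.

    nr_hugepages=512   # the count this test configured beforehand
    surp=$(get_meminfo_value HugePages_Surp)
    resv=$(get_meminfo_value HugePages_Rsvd)
    total=$(get_meminfo_value HugePages_Total)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
    else
        echo "unexpected HugePages_Total=$total (wanted $(( nr_hugepages + surp + resv )))" >&2
        exit 1
    fi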
00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 
13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.621 nr_hugepages=512 00:04:36.621 resv_hugepages=0 00:04:36.621 surplus_hugepages=0 00:04:36.621 anon_hugepages=0 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9160112 kB' 'MemAvailable: 10541660 kB' 'Buffers: 2436 kB' 'Cached: 1595932 kB' 'SwapCached: 0 kB' 'Active: 452592 kB' 'Inactive: 1267044 kB' 'Active(anon): 131732 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122836 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 131980 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70816 kB' 'KernelStack: 6256 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.621 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
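[editor's note] After HugePages_Total is read back (echo 512 below), the trace moves on to get_nodes: it walks /sys/devices/system/node/node<N> (the extglob pattern node+([0-9]) in hugepages.sh@29), records 512 pages per node in nodes_sys[], sets no_nodes=1 on this single-node VM, and then re-queries each node's surplus. The sketch below enumerates nodes the same way and reports their default-size hugepage counts via standard sysfs paths; the helper name list_node_hugepages is this sketch's own, not SPDK's.

    # Enumerate NUMA nodes and report how many 2048 kB hugepages each exposes.
    list_node_hugepages() {
        local node knob
        for node in /sys/devices/system/node/node[0-9]*; do
            knob=$node/hugepages/hugepages-2048kB/nr_hugepages
            [[ -r $knob ]] || continue
            echo "${node##*/}: $(cat "$knob") x 2048kB pages"
        done
    }
    list_node_hugepages   # on this single-node VM: "node0: 512 x 2048kB pages"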
00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
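[editor's note] Background on what the per_node_1G_alloc test is verifying: node0 is expected to hold 512 x 2048 kB = 1048576 kB (1 GiB) of hugepages, which is exactly the Hugetlb figure in the meminfo dumps and the "node0=512 expecting 512" line near the end of this test. The standard kernel knob for pinning hugepages to a single node is the per-node sysfs file shown below; this log does not show which interface scripts/setup.sh itself writes, so treat this as the generic mechanism, not SPDK's implementation.

    NODE=0      # target NUMA node (example value)
    PAGES=512   # 512 x 2048 kB = 1048576 kB = 1 GiB, matching the test's expectation
    echo "$PAGES" | sudo tee \
        "/sys/devices/system/node/node$NODE/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
    cat "/sys/devices/system/node/node$NODE/hugepages/hugepages-2048kB/nr_hugepages"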
00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.622 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.623 13:47:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9160112 kB' 'MemUsed: 3081868 kB' 'SwapCached: 0 kB' 'Active: 452604 kB' 'Inactive: 1267044 kB' 'Active(anon): 131744 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1598368 kB' 'Mapped: 48568 kB' 'AnonPages: 122852 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61164 kB' 'Slab: 131980 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.623 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
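[editor's note] Once this last per-node surplus check returns 0, the trace prints "node0=512 expecting 512", closes per_node_1G_alloc, and starts even_2G_alloc with size=2097152 (kB) and NRHUGE=1024. The page-count derivation behind those numbers is just the requested size divided by the default hugepage size reported in the meminfo dumps (Hugepagesize: 2048 kB). A worked sketch of that arithmetic, using the figures visible in this log:

    size_kb=2097152                                  # 2 GiB requested by even_2G_alloc
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 2097152 / 2048 = 1024
    echo "nr_hugepages=$nr_hugepages"                # matches NRHUGE=1024 in the trace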
00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.624 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.625 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.625 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:36.625 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.625 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.625 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.625 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.625 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.625 node0=512 expecting 512 00:04:36.625 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:36.625 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:36.625 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:36.625 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:36.625 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:36.625 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:36.625 ************************************ 00:04:36.625 END TEST per_node_1G_alloc 00:04:36.625 ************************************ 00:04:36.625 00:04:36.625 real 0m0.590s 00:04:36.625 user 0m0.275s 00:04:36.625 sys 0m0.305s 00:04:36.625 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.625 13:47:25 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:36.625 13:47:25 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:36.625 13:47:25 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.625 13:47:25 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.625 13:47:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:36.625 ************************************ 00:04:36.625 START TEST even_2G_alloc 00:04:36.625 ************************************ 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:36.625 
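The xtrace just above is the tail end of a get_meminfo lookup in setup/common.sh: the helper reads the node0 meminfo, splits each line on ': ', skips every field that is not the one requested, and finally echoes the value for HugePages_Surp (0) and returns, which is what lets the node0=512 expecting 512 check for per_node_1G_alloc pass. Below is a minimal, self-contained reconstruction of that loop pieced together from the traced statements (setup/common.sh@17 through @33); it is a sketch of what the trace shows, not a verbatim copy of the upstream helper.

```bash
#!/usr/bin/env bash
# Sketch of get_meminfo as reconstructed from the xtrace (setup/common.sh@17-@33).
# The real helper ships with SPDK's test scripts and may differ in detail.
shopt -s extglob

get_meminfo() {
	local get=$1 node=${2:-}   # e.g. get=HugePages_Surp, node=0
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# Prefer the per-node meminfo when a node was requested and the file exists
	# (the trace tests /sys/devices/system/node/node$node/meminfo first).
	if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node N "; strip that so each entry
	# looks like /proc/meminfo's "Key: value kB".
	mem=("${mem[@]#Node +([0-9]) }")

	# The loop the trace repeats once per field: split on ': ', skip until the
	# requested key matches, then print its value and stop.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
}

get_meminfo HugePages_Surp 0   # prints 0 for node0 in the run above
```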
13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.625 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:36.882 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:37.147 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:37.147 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:37.147 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:37.147 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:37.147 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:37.147 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:37.147 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:37.147 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:37.147 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:37.147 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:37.147 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:37.147 13:47:25 setup.sh.hugepages.even_2G_alloc -- 
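The get_test_nr_hugepages trace above converts the 2G request (size=2097152 kB) into nr_hugepages=1024 and, with a single populated memory node and no explicit user_nodes, assigns all of them to node0 before NRHUGE=1024 HUGE_EVEN_ALLOC=yes setup output re-runs scripts/setup.sh. A quick sketch of that arithmetic, assuming the page count is simply the requested size divided by the 2048 kB Hugepagesize reported in the meminfo snapshots (every number below is taken from the log):

```bash
# Numbers from the log; the division is an assumption that matches the traced result.
size_kb=2097152          # requested by even_2G_alloc (2G in kB)
hugepagesize_kb=2048     # Hugepagesize from the meminfo snapshots

nr_hugepages=$(( size_kb / hugepagesize_kb ))
echo "nr_hugepages=$nr_hugepages"        # nr_hugepages=1024, as in the trace

# One node, no user-supplied node list: everything lands on node0.
declare -a nodes_test
_no_nodes=1
nodes_test[_no_nodes - 1]=$nr_hugepages
echo "node0=${nodes_test[0]}"            # node0=1024
```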
setup/common.sh@17 -- # local get=AnonHugePages 00:04:37.147 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.147 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.147 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.147 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.147 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.147 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.147 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.148 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.148 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.148 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.148 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8108560 kB' 'MemAvailable: 9490108 kB' 'Buffers: 2436 kB' 'Cached: 1595932 kB' 'SwapCached: 0 kB' 'Active: 453064 kB' 'Inactive: 1267044 kB' 'Active(anon): 132204 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123332 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132044 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70880 kB' 'KernelStack: 6312 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:37.148 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.148 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.148 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.148 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.148 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.148 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.148 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.148 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.148 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.148 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.148 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
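Immediately before the AnonHugePages lookup traced above, verify_nr_hugepages runs the transparent-hugepage gate seen at setup/hugepages.sh@96 ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]). A hedged sketch of what that check appears to do; the sysfs path is inferred from the "always [madvise] never" string and should be confirmed against the real script:

```bash
# Inferred THP gate: the AnonHugePages figure is only meaningful when THP is not
# pinned to [never]; the path below is an assumption based on the traced string.
thp_state=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp_state != *"[never]"* ]]; then
	# Equivalent of get_meminfo AnonHugePages on the global /proc/meminfo.
	anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
fi
echo "anon=${anon:-0} kB"   # 0 kB in the snapshot above
```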
00:04:37.148 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.148 [xtrace elided: get_meminfo compares every field of the snapshot above against AnonHugePages, from Buffers through VmallocChunk, and hits setup/common.sh@32 continue on each one] 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.149
13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8108668 kB' 'MemAvailable: 9490216 kB' 'Buffers: 2436 kB' 'Cached: 1595932 kB' 'SwapCached: 0 kB' 'Active: 453172 kB' 'Inactive: 1267044 kB' 'Active(anon): 132312 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123440 kB' 'Mapped: 48828 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132072 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70908 kB' 'KernelStack: 6288 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
13461016 kB' 'Committed_AS: 354680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.149 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.150 13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.150 
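The snapshot just printed is the second of three get_meminfo reads in this verify pass, and its hugepage counters are internally consistent: 1024 pages (HugePages_Total) of 2048 kB each (Hugepagesize) account for the 2097152 kB Hugetlb figure, which is exactly the 2G that even_2G_alloc requested. A one-line check with the values copied from the snapshot:

```bash
# Values copied from the snapshot above.
hugepages_total=1024
hugepagesize_kb=2048
echo $(( hugepages_total * hugepagesize_kb ))   # 2097152, matching the Hugetlb line
```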
13:47:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.150 [xtrace elided: the scan of the second snapshot continues against HugePages_Surp, from Active(anon) through HugePages_Free, with no match; the wall-clock prefix rolls over from 13:47:25 to 13:47:26 partway through] 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8108668 kB' 'MemAvailable: 9490220 kB' 'Buffers: 2436 kB' 'Cached: 1595936 kB' 'SwapCached: 0 kB' 'Active: 453188 kB' 'Inactive: 1267048 kB' 'Active(anon): 132328 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123492 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132068 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70904 kB' 'KernelStack: 6240 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.151 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.151 13:47:26 
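With anon=0 and surp=0 established, the trace now reads HugePages_Rsvd (setup/hugepages.sh@100) and will fold it into the per-node expectation, the same bookkeeping that produced the node0=512 expecting 512 line for the previous test. A simplified sketch of that final tally, with nodes_sys assumed to hold the per-node counts gathered earlier and the separate loops of the real script collapsed into one:

```bash
#!/usr/bin/env bash
# Simplified sketch of the verify_nr_hugepages bookkeeping visible in the xtrace
# (hugepages.sh@117 and @126-@130); nodes_sys is assumed, resv is the
# HugePages_Rsvd value that get_meminfo returns (0 in this run).
declare -A sorted_t sorted_s
declare -a nodes_test=(1024)   # pages the test asked for on node0
declare -a nodes_sys=(1024)    # pages the system reports per node (assumed)
resv=0

for node in "${!nodes_test[@]}"; do
	(( nodes_test[node] += resv ))                  # @117, a no-op when resv=0
	sorted_t[${nodes_test[node]}]=1                 # @127
	sorted_s[${nodes_sys[node]}]=1                  # @127
	echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"   # @128
done
# @130-style final check: the distinct per-node totals must line up.
[[ "${!sorted_s[*]}" == "${!sorted_t[*]}" ]] && echo OK
```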
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.151 [xtrace elided: the third snapshot is scanned field by field against HugePages_Rsvd, from MemFree through SUnreclaim, with no match so far] 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@32 -- # continue 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.152 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 
13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.153 
13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.153 nr_hugepages=1024 00:04:37.153 resv_hugepages=0 00:04:37.153 surplus_hugepages=0 00:04:37.153 anon_hugepages=0 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:37.153 13:47:26 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8108668 kB' 'MemAvailable: 9490220 kB' 'Buffers: 2436 kB' 'Cached: 1595936 kB' 'SwapCached: 0 kB' 'Active: 452824 kB' 'Inactive: 1267048 kB' 'Active(anon): 131964 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122880 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132068 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70904 kB' 'KernelStack: 6240 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
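The long runs of IFS=': ' / read -r / continue above are each one pass of the meminfo helper this test leans on: dump /proc/meminfo (or a per-node copy), strip any "Node N " prefix, then walk the "key: value" pairs and skip every key until the requested one matches, echoing its value. The following is a minimal standalone sketch of that pattern, condensed from the commands visible in the trace rather than copied from setup/common.sh, so argument handling and edge cases are simplified:

#!/usr/bin/env bash
shopt -s extglob
# Return the value of one meminfo field, optionally for a NUMA node.
# Mirrors the scan the trace shows: load the file, drop any "Node N "
# prefix, split on ': ' and skip keys until the requested one matches.
get_meminfo() {
  local get=$1 node=${2:-} mem_f=/proc/meminfo
  local mem var val _
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < <(printf '%s\n' "${mem[@]}")
  return 1
}

get_meminfo HugePages_Total     # 1024 in the snapshot dumped above
get_meminfo HugePages_Surp 0    # per-node query, as used for node0 further below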
00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.153 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.154 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8108668 kB' 'MemUsed: 4133312 kB' 'SwapCached: 0 kB' 'Active: 452640 kB' 'Inactive: 1267048 kB' 'Active(anon): 131780 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1598372 kB' 'Mapped: 48568 kB' 'AnonPages: 122964 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61164 kB' 'Slab: 132068 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70904 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.155 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 
13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
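The scan continues below for the last few node0 fields and ends in the bookkeeping the test prints as 'node0=1024 expecting 1024': the global pool has to equal requested plus surplus plus reserved pages, and on this single-node box node0 has to account for the whole 1024-page pool. A compact sketch of that arithmetic, reading the same files the trace reads; the awk one-liners and variable names are illustrative simplifications, not the hugepages.sh implementation:

#!/usr/bin/env bash
# Helper for the flat "Key: value" layout of /proc/meminfo (assumed simplification).
field() { awk -v k="$2:" '$1 == k {print $2}' "$1"; }

expected=1024                                          # nr_hugepages set for even_2G_alloc
total=$(field /proc/meminfo HugePages_Total)           # 1024 in the dumps above
surp=$(field /proc/meminfo HugePages_Surp)             # 0
resv=$(field /proc/meminfo HugePages_Rsvd)             # 0
# Same identity the log's check uses: (( total == nr_hugepages + surp + resv ))
(( total == expected + surp + resv )) || echo "global hugepage pool mismatch"

# Per-node view: node0/meminfo lines carry a "Node 0 " prefix, hence $3/$4.
node0=/sys/devices/system/node/node0/meminfo
node0_total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node0")
node0_surp=$(awk '$3 == "HugePages_Surp:"  {print $4}' "$node0")   # the query the surrounding trace performs; 0 here
echo "node0=$node0_total expecting $expected (node0 surplus: $node0_surp)"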
00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:37.156 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:37.157 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:37.157 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:37.157 node0=1024 expecting 1024 00:04:37.157 13:47:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:37.157 00:04:37.157 real 0m0.584s 00:04:37.157 user 0m0.284s 00:04:37.157 sys 0m0.303s 00:04:37.157 ************************************ 00:04:37.157 END TEST even_2G_alloc 00:04:37.157 ************************************ 00:04:37.157 13:47:26 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:04:37.157 13:47:26 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:37.415 13:47:26 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:37.416 13:47:26 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.416 13:47:26 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.416 13:47:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:37.416 ************************************ 00:04:37.416 START TEST odd_alloc 00:04:37.416 ************************************ 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.416 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:37.678 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:37.678 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:37.678 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:37.678 13:47:26 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8111160 kB' 'MemAvailable: 9492712 kB' 'Buffers: 2436 kB' 'Cached: 1595936 kB' 'SwapCached: 0 kB' 'Active: 452680 kB' 'Inactive: 1267048 kB' 'Active(anon): 131820 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123256 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132108 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70944 kB' 'KernelStack: 6276 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.678 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.679 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8110908 kB' 'MemAvailable: 9492460 kB' 'Buffers: 2436 kB' 'Cached: 1595936 kB' 'SwapCached: 0 kB' 'Active: 452600 kB' 'Inactive: 1267048 kB' 'Active(anon): 131740 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122872 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132108 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70944 kB' 'KernelStack: 6256 kB' 'PageTables: 4196 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.680 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:37.681 13:47:26 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8110908 kB' 'MemAvailable: 9492460 kB' 'Buffers: 2436 kB' 'Cached: 1595936 kB' 'SwapCached: 0 kB' 'Active: 452428 kB' 'Inactive: 1267048 kB' 'Active(anon): 131568 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122960 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132108 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70944 kB' 'KernelStack: 6272 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.681 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.682 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 
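The long block of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" tests followed by "continue" above is bash xtrace from setup/common.sh scanning /proc/meminfo one field at a time until it reaches HugePages_Rsvd. A minimal sketch of that scan pattern, using a hypothetical helper name (get_meminfo_value) and per-line herestring reads rather than the exact SPDK implementation:

    get_meminfo_value() {
        local get=$1 mem_f=/proc/meminfo line var val rest
        local -a mem
        mapfile -t mem < "$mem_f"                      # one array element per meminfo line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val rest <<< "$line"  # "HugePages_Rsvd:    0" -> var=HugePages_Rsvd val=0
            [[ $var == "$get" ]] || continue           # every non-matching field shows up as a continue in the xtrace
            echo "$val"
            return 0
        done
        return 1
    }
    # e.g. get_meminfo_value HugePages_Rsvd prints 0 on this host, matching resv=0 further down
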
13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:37.683 nr_hugepages=1025 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:37.683 resv_hugepages=0 00:04:37.683 surplus_hugepages=0 00:04:37.683 anon_hugepages=0 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.683 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.964 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.964 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8110908 
kB' 'MemAvailable: 9492460 kB' 'Buffers: 2436 kB' 'Cached: 1595936 kB' 'SwapCached: 0 kB' 'Active: 452652 kB' 'Inactive: 1267048 kB' 'Active(anon): 131792 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122956 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132096 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70932 kB' 'KernelStack: 6272 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 
13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.965 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:37.966 
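The get_nodes trace just above walks /sys/devices/system/node/node* and records a per-NUMA-node hugepage count (nodes_sys[0]=1025, no_nodes=1 on this single-node VM). A minimal sketch of that per-node accounting, under the assumption that awk over each node's meminfo file is an acceptable stand-in for the helper used here; the names are illustrative, not the exact setup/hugepages.sh code:

    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}                                        # "node0" -> "0"
        # per-node lines look like "Node 0 HugePages_Total:  1025"
        nodes_sys[id]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
    done
    no_nodes=${#nodes_sys[@]}
    echo "no_nodes=$no_nodes nodes_sys=(${nodes_sys[*]})"        # expected here: no_nodes=1 nodes_sys=(1025)
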
13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8110908 kB' 'MemUsed: 4131072 kB' 'SwapCached: 0 kB' 'Active: 452376 kB' 'Inactive: 1267048 kB' 'Active(anon): 131516 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 1598372 kB' 'Mapped: 48568 kB' 'AnonPages: 122960 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61164 kB' 'Slab: 132096 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.966 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.967 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:37.968 node0=1025 expecting 1025 00:04:37.968 ************************************ 00:04:37.968 END TEST odd_alloc 00:04:37.968 ************************************ 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 
expecting 1025' 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:37.968 00:04:37.968 real 0m0.564s 00:04:37.968 user 0m0.264s 00:04:37.968 sys 0m0.298s 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.968 13:47:26 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:37.968 13:47:26 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:37.968 13:47:26 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.968 13:47:26 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.968 13:47:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:37.968 ************************************ 00:04:37.968 START TEST custom_alloc 00:04:37.968 ************************************ 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.968 13:47:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:38.261 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:38.261 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:38.261 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:38.261 13:47:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9161696 kB' 'MemAvailable: 10543248 kB' 'Buffers: 2436 kB' 'Cached: 1595936 kB' 'SwapCached: 0 kB' 'Active: 453304 kB' 'Inactive: 1267048 kB' 'Active(anon): 132444 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123520 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132092 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70928 kB' 'KernelStack: 6260 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.261 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9161696 kB' 'MemAvailable: 10543248 kB' 'Buffers: 2436 kB' 'Cached: 1595936 kB' 'SwapCached: 0 kB' 'Active: 452468 kB' 'Inactive: 1267048 kB' 'Active(anon): 131608 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122980 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132084 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70920 kB' 'KernelStack: 6272 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.262 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9161696 kB' 'MemAvailable: 10543248 kB' 'Buffers: 2436 kB' 'Cached: 1595936 kB' 'SwapCached: 0 kB' 'Active: 452464 kB' 'Inactive: 1267048 kB' 'Active(anon): 131604 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122992 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132080 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70916 kB' 'KernelStack: 6272 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.264 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.265 13:47:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.265 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.527 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:38.528 nr_hugepages=512 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:38.528 resv_hugepages=0 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:38.528 surplus_hugepages=0 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:38.528 anon_hugepages=0 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:38.528 
13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9163248 kB' 'MemAvailable: 10544800 kB' 'Buffers: 2436 kB' 'Cached: 1595936 kB' 'SwapCached: 0 kB' 'Active: 452504 kB' 'Inactive: 1267048 kB' 'Active(anon): 131644 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123036 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132076 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70912 kB' 'KernelStack: 6288 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.528 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.529 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.530 13:47:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9163508 kB' 'MemUsed: 3078472 kB' 'SwapCached: 0 kB' 'Active: 452788 kB' 'Inactive: 1267048 kB' 'Active(anon): 131928 kB' 'Inactive(anon): 0 kB' 
'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1598372 kB' 'Mapped: 48568 kB' 'AnonPages: 123088 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61164 kB' 'Slab: 132076 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.530 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.531 node0=512 expecting 512 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:38.531 13:47:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:38.531 00:04:38.531 real 0m0.539s 00:04:38.531 user 0m0.277s 00:04:38.531 sys 0m0.290s 00:04:38.532 13:47:27 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.532 13:47:27 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:38.532 ************************************ 00:04:38.532 END TEST custom_alloc 00:04:38.532 ************************************ 00:04:38.532 13:47:27 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:38.532 13:47:27 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.532 13:47:27 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.532 13:47:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:38.532 ************************************ 00:04:38.532 START TEST no_shrink_alloc 00:04:38.532 ************************************ 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:38.532 13:47:27 
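Note: custom_alloc finishes here (node0=512 expecting 512, [[ 512 == \5\1\2 ]], real 0m0.539s), and no_shrink_alloc starts by calling get_test_nr_hugepages 2097152 0. The numbers visible in the trace work out as 2097152 / 2048 = 1024, i.e. nr_hugepages=1024 two-megabyte pages directed at node 0 (node_ids=('0') below), matching the HugePages_Total: 1024 and Hugetlb: 2097152 kB rows printed later in this test. A small bash sketch of that arithmetic, assuming the 2048 kB Hugepagesize reported earlier in the log (names are illustrative):

  # Sizing arithmetic visible in the trace for no_shrink_alloc.
  size=2097152            # argument passed to get_test_nr_hugepages
  hugepagesize_kb=2048    # Hugepagesize from /proc/meminfo in this run
  nr_hugepages=$(( size / hugepagesize_kb ))
  echo "nr_hugepages=$nr_hugepages"   # -> 1024, all assigned to node 0

The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test a little further below checks the transparent-hugepage policy string (the bracketed entry marks the active mode, presumably read from /sys/kernel/mm/transparent_hugepage/enabled); since it is not [never], the test goes on to sample AnonHugePages and records anon=0.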
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.532 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:38.790 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:38.790 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:38.790 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=AnonHugePages 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8118112 kB' 'MemAvailable: 9499664 kB' 'Buffers: 2436 kB' 'Cached: 1595936 kB' 'SwapCached: 0 kB' 'Active: 453008 kB' 'Inactive: 1267048 kB' 'Active(anon): 132148 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123296 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132060 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70896 kB' 'KernelStack: 6296 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:39.052 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.053 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8118740 kB' 'MemAvailable: 9500292 kB' 'Buffers: 2436 kB' 'Cached: 1595936 kB' 'SwapCached: 0 kB' 'Active: 452724 kB' 'Inactive: 1267048 kB' 'Active(anon): 131864 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122984 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132060 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70896 kB' 'KernelStack: 6256 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.054 13:47:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.054 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 
13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.055 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8118740 kB' 'MemAvailable: 9500292 kB' 'Buffers: 2436 kB' 'Cached: 1595936 kB' 'SwapCached: 0 kB' 'Active: 452772 kB' 'Inactive: 1267048 kB' 'Active(anon): 131912 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123020 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132060 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70896 kB' 'KernelStack: 6272 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.056 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 
13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.057 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:39.058 nr_hugepages=1024 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:39.058 resv_hugepages=0 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:39.058 surplus_hugepages=0 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:39.058 anon_hugepages=0 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8118740 kB' 'MemAvailable: 9500292 kB' 'Buffers: 2436 kB' 'Cached: 1595936 kB' 'SwapCached: 0 kB' 'Active: 452492 kB' 'Inactive: 1267048 kB' 'Active(anon): 131632 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122744 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132060 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70896 kB' 'KernelStack: 6240 kB' 'PageTables: 4152 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.058 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
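The repeated IFS=': ', read -r var val _, and continue entries above and below are the xtrace of a key-matching loop: setup/common.sh walks a captured meminfo snapshot one line at a time and skips every key that is not the one requested (here HugePages_Total). The following is a minimal, self-contained sketch of that lookup pattern, assuming a generic helper; the function name and the sed-based prefix stripping are illustrative, not the exact setup/common.sh code.

# Sketch of the lookup traced above (illustrative names, not the exact
# setup/common.sh implementation). Works on /proc/meminfo and on per-node
# files such as /sys/devices/system/node/node0/meminfo.
get_meminfo_value() {
    local key=$1 mem_f=${2:-/proc/meminfo} var val _
    # Per-node meminfo lines carry a "Node <N> " prefix; strip it so the
    # same IFS=': ' split works for both file types.
    while IFS=': ' read -r var val _; do
        # Every non-matching key shows up in the trace as one "continue" entry.
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

For the snapshot dumped above, get_meminfo_value HugePages_Total would print 1024, which is the value echoed at the end of this scan; get_meminfo_value HugePages_Surp /sys/devices/system/node/node0/meminfo corresponds to the per-node lookup traced further down.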
00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.059 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:39.060 13:47:27 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8121124 kB' 'MemUsed: 4120856 kB' 'SwapCached: 0 kB' 'Active: 448632 kB' 'Inactive: 1267048 kB' 'Active(anon): 127772 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1598372 kB' 'Mapped: 47948 kB' 'AnonPages: 118612 kB' 'Shmem: 10464 kB' 'KernelStack: 6208 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61164 kB' 'Slab: 132020 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 70856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.060 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:39.061 node0=1024 expecting 1024 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:39.061 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:39.062 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:39.062 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:39.062 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:39.062 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.062 13:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:39.320 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:39.320 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:39.320 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:39.585 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 
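The 'node0=1024 expecting 1024' line above closes the per-node accounting: the test sums the hugepages read for each NUMA node (plus reserved and surplus pages, both 0 in this run) and compares the result with the expected pool size. The subsequent run of scripts/setup.sh with NRHUGE=512 and CLEAR_HUGE=no is the no-shrink part of the test, and the INFO line confirms that the existing 1024-page pool is left in place rather than shrunk to 512. A rough, self-contained sketch of that per-node check follows; variable names are illustrative and awk stands in for the traced read loop.

# Rough sketch of the per-node check that prints "node0=1024 expecting 1024".
expected=1024
declare -A nodes_test
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Per-node meminfo reports "Node <N> HugePages_Total: <count>".
    # The traced script also folds in HugePages_Rsvd and HugePages_Surp,
    # both 0 in this run.
    nodes_test[$node]=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
done
for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_test[$node]} expecting $expected"
    [[ ${nodes_test[$node]} -eq $expected ]] || exit 1
done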
00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8124464 kB' 'MemAvailable: 9506012 kB' 'Buffers: 2436 kB' 'Cached: 1595936 kB' 'SwapCached: 0 kB' 'Active: 448328 kB' 'Inactive: 1267048 kB' 'Active(anon): 127468 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118616 kB' 'Mapped: 47964 kB' 'Shmem: 10464 kB' 'KReclaimable: 61156 kB' 'Slab: 131852 kB' 'SReclaimable: 61156 kB' 'SUnreclaim: 70696 kB' 'KernelStack: 6232 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
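Having confirmed the pool size, verify_nr_hugepages also accounts for transparent huge pages: because /sys/kernel/mm/transparent_hugepage/enabled reads 'always [madvise] never' rather than being pinned to [never], it looks up AnonHugePages from /proc/meminfo, which the snapshot above reports as 0 kB, so anon ends up 0. A hedged, stand-alone approximation of that probe is sketched below; it mirrors the shape of the traced check, not the helper's actual code.

# Approximation of the AnonHugePages probe traced here (illustrative only).
thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp_state != *"[never]"* ]]; then
    # THP can hand out anonymous huge pages, so count them (value is in kB).
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon_kb=0
fi
echo "anon=${anon_kb:-0}"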
00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 
13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.585 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 
13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.586 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8124724 kB' 'MemAvailable: 9506272 kB' 'Buffers: 2436 kB' 'Cached: 1595936 kB' 'SwapCached: 0 kB' 'Active: 448068 kB' 'Inactive: 1267048 kB' 'Active(anon): 127208 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118356 kB' 'Mapped: 47888 kB' 'Shmem: 10464 kB' 'KReclaimable: 61156 kB' 'Slab: 131848 kB' 'SReclaimable: 61156 kB' 'SUnreclaim: 70692 kB' 'KernelStack: 6200 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:39.588 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8124872 kB' 'MemAvailable: 9506420 kB' 'Buffers: 2436 kB' 'Cached: 1595936 kB' 'SwapCached: 0 kB' 'Active: 447868 kB' 'Inactive: 1267048 kB' 'Active(anon): 127008 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118148 kB' 'Mapped: 47964 kB' 'Shmem: 10464 kB' 'KReclaimable: 61156 kB' 'Slab: 131848 kB' 'SReclaimable: 61156 kB' 'SUnreclaim: 70692 kB' 
'KernelStack: 6152 kB' 'PageTables: 3552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
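The scans above and below are the bash xtrace of the get_meminfo helper in setup/common.sh walking /proc/meminfo one key at a time: every non-matching key hits the "continue" branch seen on each entry, and the value of the requested key (AnonHugePages, HugePages_Surp, HugePages_Rsvd, ...) is echoed and the function returns 0. The following is a minimal sketch of that pattern reconstructed from the trace, not copied from the script; the helper name is hypothetical, and the real function's per-NUMA-node branch (the /sys/devices/system/node/node<N>/meminfo check and the "Node <N> " prefix stripping visible in the trace) is deliberately left out.

  # Sketch only: approximates the scan shown in the trace above.
  get_meminfo_sketch() {
      local get=$1 var val _ line
      local -a mem
      mapfile -t mem < /proc/meminfo        # one "Key:   value [kB]" entry per element
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue  # the repeated "continue" entries in the trace
          echo "$val"                       # kB figure, or a bare count for HugePages_*
          return 0
      done
      return 1
  }

Against the meminfo snapshot printed in this trace it would report 0 for AnonHugePages, HugePages_Surp and HugePages_Rsvd, and 1024 for HugePages_Total.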
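Those lookups feed the bookkeeping that no_shrink_alloc performs a few entries further on, where the trace records anon=0, surp=0, resv=0, the nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 summary lines, and the (( 1024 == nr_hugepages + surp + resv )) test from setup/hugepages.sh. A hedged sketch of that arithmetic, using the hypothetical helper above rather than the test's own code:

  # Sketch of the accounting the trace exercises; variable names are assumptions.
  anon=$(get_meminfo_sketch AnonHugePages)     # 0 in the snapshot above
  surp=$(get_meminfo_sketch HugePages_Surp)    # 0
  resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0
  total=$(get_meminfo_sketch HugePages_Total)  # 1024
  nr_hugepages=1024                            # the count the test configured
  (( total == nr_hugepages + surp + resv )) || echo "hugepage count changed unexpectedly"

As a cross-check on the snapshot itself: 1024 huge pages at a Hugepagesize of 2048 kB is 2097152 kB, which matches the Hugetlb figure in the meminfo dump.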
00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.590 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:39.591 nr_hugepages=1024 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:39.591 resv_hugepages=0 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:39.591 surplus_hugepages=0 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:39.591 anon_hugepages=0 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8124872 kB' 'MemAvailable: 9506420 kB' 'Buffers: 2436 kB' 'Cached: 1595936 kB' 'SwapCached: 0 kB' 'Active: 448116 kB' 'Inactive: 1267048 kB' 'Active(anon): 127256 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118400 kB' 'Mapped: 47964 kB' 'Shmem: 10464 kB' 'KReclaimable: 61156 kB' 
'Slab: 131848 kB' 'SReclaimable: 61156 kB' 'SUnreclaim: 70692 kB' 'KernelStack: 6152 kB' 'PageTables: 3552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.591 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.592 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:39.593 13:47:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8124872 kB' 'MemUsed: 4117108 kB' 'SwapCached: 0 kB' 'Active: 447976 kB' 'Inactive: 1267048 kB' 'Active(anon): 127116 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 1598372 kB' 'Mapped: 47828 kB' 'AnonPages: 118268 kB' 'Shmem: 10464 kB' 'KernelStack: 6160 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61156 kB' 'Slab: 131848 kB' 'SReclaimable: 61156 kB' 'SUnreclaim: 70692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.593 13:47:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.593 
13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.593 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:39.594 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:39.594 node0=1024 expecting 1024 00:04:39.595 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:39.595 13:47:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:39.595 00:04:39.595 real 0m1.097s 00:04:39.595 user 0m0.548s 00:04:39.595 sys 0m0.611s 00:04:39.595 13:47:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.595 13:47:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:39.595 ************************************ 00:04:39.595 END TEST no_shrink_alloc 00:04:39.595 ************************************ 00:04:39.595 13:47:28 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:39.595 13:47:28 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:39.595 13:47:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:39.595 13:47:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:39.595 13:47:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:39.595 13:47:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:39.595 13:47:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:39.595 13:47:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:39.595 13:47:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:39.595 00:04:39.595 real 0m4.876s 00:04:39.595 user 0m2.278s 00:04:39.595 sys 0m2.556s 00:04:39.595 13:47:28 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.595 13:47:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:39.595 ************************************ 00:04:39.595 END TEST hugepages 00:04:39.595 ************************************ 00:04:39.595 13:47:28 setup.sh -- 
setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:39.595 13:47:28 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.595 13:47:28 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.595 13:47:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:39.854 ************************************ 00:04:39.854 START TEST driver 00:04:39.854 ************************************ 00:04:39.854 13:47:28 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:39.854 * Looking for test storage... 00:04:39.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:39.854 13:47:28 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:39.854 13:47:28 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.854 13:47:28 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.421 13:47:29 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:40.421 13:47:29 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.421 13:47:29 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.421 13:47:29 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:40.421 ************************************ 00:04:40.421 START TEST guess_driver 00:04:40.421 ************************************ 00:04:40.421 13:47:29 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:40.421 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:40.421 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:40.421 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:40.421 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:40.421 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:40.421 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:40.421 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:40.421 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:40.421 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:40.421 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:40.421 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:40.421 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:40.421 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:40.421 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:40.422 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:40.422 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:40.422 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:40.422 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:40.422 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo 
uio_pci_generic 00:04:40.422 Looking for driver=uio_pci_generic 00:04:40.422 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:40.422 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:40.422 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:40.422 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.422 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:40.422 13:47:29 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.422 13:47:29 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:40.990 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:40.990 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:40.990 13:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.248 13:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.248 13:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:41.248 13:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.248 13:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.248 13:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:41.248 13:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.248 13:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:41.248 13:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:41.248 13:47:30 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.248 13:47:30 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:41.815 00:04:41.815 real 0m1.441s 00:04:41.815 user 0m0.526s 00:04:41.815 sys 0m0.915s 00:04:41.815 13:47:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.815 ************************************ 00:04:41.815 13:47:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:41.815 END TEST guess_driver 00:04:41.815 ************************************ 00:04:41.815 00:04:41.815 real 0m2.165s 00:04:41.815 user 0m0.772s 00:04:41.815 sys 0m1.434s 00:04:41.815 13:47:30 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.815 13:47:30 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:41.815 ************************************ 00:04:41.815 END TEST driver 00:04:41.815 ************************************ 00:04:41.815 13:47:30 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:41.815 13:47:30 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.815 13:47:30 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.815 13:47:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:41.815 ************************************ 00:04:41.815 START TEST devices 00:04:41.815 
************************************ 00:04:41.815 13:47:30 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:42.073 * Looking for test storage... 00:04:42.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:42.073 13:47:30 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:42.073 13:47:30 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:42.073 13:47:30 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.073 13:47:30 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:42.641 13:47:31 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:42.641 13:47:31 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:42.641 13:47:31 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:42.641 13:47:31 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:42.641 13:47:31 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:42.641 13:47:31 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:42.641 13:47:31 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 
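Note: the trace above shows devices.sh walking /sys/block/nvme* and skipping any namespace whose queue/zoned attribute is not "none" before it fills the blocks/blocks_to_pci arrays and sets min_disk_size. A minimal standalone sketch of that zoned-device filter, reconstructed from the logged commands (the variable names and the final printf are illustrative, not the script's own helpers):

  #!/usr/bin/env bash
  # Collect non-zoned nvme block devices, mirroring the is_block_zoned checks
  # logged above. Reconstruction for illustration only.
  declare -a usable=()
  for sysdev in /sys/block/nvme*; do
      [[ -e $sysdev ]] || continue
      dev=${sysdev##*/}
      # queue/zoned reads "none" for a conventional (non-zoned) namespace
      if [[ -e $sysdev/queue/zoned && $(<"$sysdev/queue/zoned") != none ]]; then
          continue   # zoned namespace: skip it, as the test does
      fi
      usable+=("$dev")
  done
  printf 'candidate devices: %s\n' "${usable[*]:-(none)}"

min_disk_size=3221225472 (3 GiB) from the last entry above is then the floor each surviving candidate must clear, as seen in the per-disk loop that follows.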
00:04:42.641 13:47:31 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:42.641 13:47:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:42.641 13:47:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:42.641 13:47:31 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:42.641 13:47:31 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:42.641 13:47:31 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:42.641 13:47:31 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:42.641 13:47:31 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:42.900 No valid GPT data, bailing 00:04:42.900 13:47:31 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:42.900 13:47:31 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:42.900 13:47:31 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:42.900 13:47:31 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:42.900 13:47:31 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:42.900 13:47:31 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:42.900 13:47:31 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:42.900 13:47:31 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:42.900 No valid GPT data, bailing 00:04:42.900 13:47:31 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:42.900 13:47:31 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:42.900 13:47:31 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:42.900 13:47:31 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:42.900 13:47:31 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:42.900 13:47:31 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 
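Note: each iteration logged here (nvme0n1 and nvme0n2 above, the remaining disks below) runs the same in-use probe: scripts/spdk-gpt.py, then blkid -s PTTYPE, treating an empty PTTYPE ("No valid GPT data, bailing") as "disk is free", before gating the device on min_disk_size. A hedged sketch of that probe-and-gate step, with the spdk-gpt.py call omitted and a hypothetical helper name:

  # Treat a disk as free when blkid finds no partition-table signature,
  # then keep it only if it is at least min_disk_size bytes.
  # Assumption: this mirrors the block_in_use + sec_size_to_bytes pair in the
  # trace; /sys/block/<dev>/size is counted in 512-byte sectors.
  min_disk_size=3221225472
  disk_is_free_and_big_enough() {
      local dev=$1 pt size
      pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null)
      [[ -z $pt ]] || return 1                      # has a partition table: in use
      size=$(( $(<"/sys/block/$dev/size") * 512 ))
      (( size >= min_disk_size ))
  }
  disk_is_free_and_big_enough nvme0n1 && echo "nvme0n1 is usable for the tests"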
00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:42.900 13:47:31 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:42.900 13:47:31 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:42.900 No valid GPT data, bailing 00:04:42.900 13:47:31 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:42.900 13:47:31 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:42.900 13:47:31 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:42.900 13:47:31 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:42.900 13:47:31 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:42.900 13:47:31 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:42.900 13:47:31 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:42.900 13:47:31 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:42.900 13:47:31 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:43.159 No valid GPT data, bailing 00:04:43.159 13:47:31 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:43.159 13:47:31 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:43.159 13:47:31 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:43.159 13:47:31 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:43.159 13:47:31 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:43.159 13:47:31 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:43.159 13:47:31 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:43.159 13:47:31 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:43.159 13:47:31 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:43.159 13:47:31 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:43.159 13:47:31 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:43.159 13:47:31 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:43.159 13:47:31 setup.sh.devices 
-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:43.159 13:47:31 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.159 13:47:31 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.159 13:47:31 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:43.159 ************************************ 00:04:43.159 START TEST nvme_mount 00:04:43.159 ************************************ 00:04:43.159 13:47:31 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:43.159 13:47:31 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:43.159 13:47:31 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:43.159 13:47:31 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:43.159 13:47:31 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:43.159 13:47:31 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:43.159 13:47:31 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:43.159 13:47:31 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:43.159 13:47:31 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:43.159 13:47:31 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:43.159 13:47:31 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:43.159 13:47:31 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:43.159 13:47:31 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:43.159 13:47:31 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:43.159 13:47:31 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:43.160 13:47:31 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:43.160 13:47:31 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:43.160 13:47:31 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:43.160 13:47:31 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:43.160 13:47:31 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:44.096 Creating new GPT entries in memory. 00:04:44.096 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:44.096 other utilities. 00:04:44.096 13:47:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:44.096 13:47:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:44.096 13:47:33 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:44.096 13:47:33 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:44.096 13:47:33 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:45.054 Creating new GPT entries in memory. 00:04:45.054 The operation has completed successfully. 
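Note: the nvme_mount preparation logged above boils down to the following standalone sequence; the commands are the ones in the trace (sgdisk --zap-all, sgdisk --new=1:2048:264191, mkfs.ext4 -qF, mount), while the disk and mount point below are placeholders and the flock/sync_dev_uevents.sh coordination the test wraps around sgdisk is omitted:

  disk=/dev/nvme0n1
  mnt=/tmp/nvme_mount_demo             # hypothetical mount point for illustration
  sgdisk "$disk" --zap-all             # wipe any existing GPT/MBR structures
  sgdisk "$disk" --new=1:2048:264191   # 262144-sector partition starting at sector 2048
  mkfs.ext4 -qF "${disk}p1"            # quiet, force-overwrite, as in the trace
  mkdir -p "$mnt"
  mount "${disk}p1" "$mnt"

The "wait 57006" entry above appears to be the test waiting on the backgrounded sync_dev_uevents.sh helper so that the nvme0n1p1 node exists before mkfs runs.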
00:04:45.054 13:47:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:45.054 13:47:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.054 13:47:34 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57006 00:04:45.054 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:45.054 13:47:34 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:45.054 13:47:34 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:45.054 13:47:34 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:45.054 13:47:34 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:45.313 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.571 13:47:34 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:45.571 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.571 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:45.571 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.571 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:45.571 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:45.571 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:45.571 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:45.571 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:45.571 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:45.571 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:45.571 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:45.829 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:45.829 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:45.829 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:45.829 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:45.829 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:46.087 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:46.087 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:46.087 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:46.087 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:46.087 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:46.087 13:47:34 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:46.087 13:47:34 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.087 13:47:34 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:46.087 13:47:34 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:46.087 13:47:34 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.087 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:46.087 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:46.087 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:04:46.087 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.087 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:46.087 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:46.087 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:46.087 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:46.087 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:46.087 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.087 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:46.087 13:47:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:46.087 13:47:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.087 13:47:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:46.346 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:46.346 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:46.346 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:46.346 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.346 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:46.346 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.346 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:46.346 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.605 13:47:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:46.864 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:46.864 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:46.864 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:46.864 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.864 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:46.864 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.123 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:47.123 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.123 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:47.123 13:47:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.123 13:47:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.123 13:47:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:47.123 13:47:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:47.123 13:47:36 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:47.123 13:47:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:47.123 13:47:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.123 13:47:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.123 13:47:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:47.123 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:47.123 00:04:47.123 real 0m4.087s 00:04:47.123 user 0m0.698s 00:04:47.123 sys 0m1.075s 00:04:47.123 13:47:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.123 13:47:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:47.123 ************************************ 00:04:47.123 END TEST nvme_mount 00:04:47.123 
************************************ 00:04:47.123 13:47:36 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:47.123 13:47:36 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.123 13:47:36 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.123 13:47:36 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:47.123 ************************************ 00:04:47.123 START TEST dm_mount 00:04:47.123 ************************************ 00:04:47.123 13:47:36 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:47.124 13:47:36 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:48.499 Creating new GPT entries in memory. 00:04:48.499 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:48.499 other utilities. 00:04:48.499 13:47:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:48.499 13:47:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.499 13:47:37 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:48.499 13:47:37 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:48.499 13:47:37 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:49.435 Creating new GPT entries in memory. 00:04:49.435 The operation has completed successfully. 
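(The second partition below is created by the same loop. Condensed, the partitioning step traced here amounts to the sketch that follows; it is simplified in that the real setup/common.sh also backgrounds sync_dev_uevents.sh to wait for the resulting partition uevents.)

  disk=nvme0n1
  part_no=2
  size=$(( 1073741824 / 4096 ))                 # value fed into the sgdisk sector math below
  sgdisk "/dev/$disk" --zap-all                 # wipe existing GPT/MBR signatures
  part_start=0 part_end=0
  for (( part = 1; part <= part_no; part++ )); do
      (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
      (( part_end = part_start + size - 1 ))
      flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
  done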
00:04:49.435 13:47:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:49.435 13:47:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:49.435 13:47:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:49.435 13:47:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:49.435 13:47:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:50.372 The operation has completed successfully. 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57442 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.372 13:47:39 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:50.631 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:50.631 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:50.631 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:50.631 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.631 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:50.631 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.631 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:50.631 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.890 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:50.890 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.890 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.890 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:50.890 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:50.890 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:50.890 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:50.890 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:50.890 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:50.891 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:50.891 13:47:39 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:50.891 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:50.891 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:50.891 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:50.891 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:50.891 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:50.891 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.891 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:50.891 13:47:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:50.891 13:47:39 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.891 13:47:39 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:51.150 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:51.150 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:51.150 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:51.150 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.150 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:51.150 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.150 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:51.150 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.409 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:51.409 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.409 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:51.409 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:51.409 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:51.409 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:51.409 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:51.409 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:51.409 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:51.409 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:51.409 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:51.409 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:51.409 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:51.409 13:47:40 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:04:51.409 00:04:51.409 real 0m4.245s 00:04:51.409 user 0m0.493s 00:04:51.409 sys 0m0.712s 00:04:51.409 13:47:40 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.409 13:47:40 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:51.409 ************************************ 00:04:51.409 END TEST dm_mount 00:04:51.409 ************************************ 00:04:51.409 13:47:40 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:51.409 13:47:40 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:51.409 13:47:40 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:51.409 13:47:40 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:51.409 13:47:40 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:51.409 13:47:40 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:51.409 13:47:40 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:51.976 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:51.976 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:51.976 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:51.976 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:51.976 13:47:40 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:51.976 13:47:40 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:51.976 13:47:40 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:51.976 13:47:40 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:51.976 13:47:40 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:51.976 13:47:40 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:51.976 13:47:40 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:51.976 ************************************ 00:04:51.976 END TEST devices 00:04:51.976 ************************************ 00:04:51.976 00:04:51.976 real 0m9.901s 00:04:51.976 user 0m1.857s 00:04:51.976 sys 0m2.387s 00:04:51.976 13:47:40 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.976 13:47:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:51.976 ************************************ 00:04:51.976 END TEST setup.sh 00:04:51.976 ************************************ 00:04:51.976 00:04:51.976 real 0m22.025s 00:04:51.976 user 0m7.110s 00:04:51.976 sys 0m9.172s 00:04:51.976 13:47:40 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.976 13:47:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:51.976 13:47:40 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:52.568 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:52.568 Hugepages 00:04:52.568 node hugesize free / total 00:04:52.568 node0 1048576kB 0 / 0 00:04:52.568 node0 2048kB 2048 / 2048 00:04:52.568 00:04:52.568 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:52.568 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:52.826 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:52.826 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:04:52.826 13:47:41 -- spdk/autotest.sh@130 -- # uname -s 00:04:52.826 13:47:41 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:52.826 13:47:41 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:52.826 13:47:41 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:53.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:53.653 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:53.653 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:53.653 13:47:42 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:54.588 13:47:43 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:54.588 13:47:43 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:54.588 13:47:43 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:54.588 13:47:43 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:54.588 13:47:43 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:54.588 13:47:43 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:54.588 13:47:43 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:54.588 13:47:43 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:54.588 13:47:43 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:54.846 13:47:43 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:54.846 13:47:43 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:54.846 13:47:43 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:55.104 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:55.104 Waiting for block devices as requested 00:04:55.104 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:55.361 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:55.361 13:47:44 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:55.361 13:47:44 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:55.361 13:47:44 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:55.361 13:47:44 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:55.361 13:47:44 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:55.361 13:47:44 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:55.361 13:47:44 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:55.361 13:47:44 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:55.361 13:47:44 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:55.361 13:47:44 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:55.361 13:47:44 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:55.361 13:47:44 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:55.361 13:47:44 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:55.361 13:47:44 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:55.361 13:47:44 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:55.361 13:47:44 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:55.361 13:47:44 -- common/autotest_common.sh@1554 -- # grep unvmcap 
00:04:55.361 13:47:44 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:55.361 13:47:44 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:55.361 13:47:44 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:55.361 13:47:44 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:55.361 13:47:44 -- common/autotest_common.sh@1557 -- # continue 00:04:55.361 13:47:44 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:55.361 13:47:44 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:55.361 13:47:44 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:55.361 13:47:44 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:55.361 13:47:44 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:55.361 13:47:44 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:55.361 13:47:44 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:55.361 13:47:44 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:55.361 13:47:44 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:55.361 13:47:44 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:55.361 13:47:44 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:55.361 13:47:44 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:55.361 13:47:44 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:55.361 13:47:44 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:55.361 13:47:44 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:55.361 13:47:44 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:55.361 13:47:44 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:55.361 13:47:44 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:55.361 13:47:44 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:55.361 13:47:44 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:55.361 13:47:44 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:55.361 13:47:44 -- common/autotest_common.sh@1557 -- # continue 00:04:55.361 13:47:44 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:55.361 13:47:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:55.361 13:47:44 -- common/autotest_common.sh@10 -- # set +x 00:04:55.361 13:47:44 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:55.361 13:47:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:55.361 13:47:44 -- common/autotest_common.sh@10 -- # set +x 00:04:55.361 13:47:44 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:56.296 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:56.296 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:56.296 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:56.296 13:47:45 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:56.296 13:47:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:56.296 13:47:45 -- common/autotest_common.sh@10 -- # set +x 00:04:56.296 13:47:45 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:56.296 13:47:45 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:56.296 13:47:45 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:56.296 13:47:45 -- 
common/autotest_common.sh@1577 -- # bdfs=() 00:04:56.296 13:47:45 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:56.296 13:47:45 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:56.296 13:47:45 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:56.296 13:47:45 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:56.296 13:47:45 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:56.296 13:47:45 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:56.296 13:47:45 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:56.296 13:47:45 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:56.296 13:47:45 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:56.296 13:47:45 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:56.296 13:47:45 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:56.296 13:47:45 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:56.296 13:47:45 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:56.296 13:47:45 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:56.296 13:47:45 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:56.296 13:47:45 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:56.296 13:47:45 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:56.296 13:47:45 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:56.554 13:47:45 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:56.554 13:47:45 -- common/autotest_common.sh@1593 -- # return 0 00:04:56.554 13:47:45 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:56.554 13:47:45 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:56.554 13:47:45 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:56.554 13:47:45 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:56.554 13:47:45 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:56.554 13:47:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:56.554 13:47:45 -- common/autotest_common.sh@10 -- # set +x 00:04:56.554 13:47:45 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:56.554 13:47:45 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:56.554 13:47:45 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:56.554 13:47:45 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:56.554 13:47:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.554 13:47:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.554 13:47:45 -- common/autotest_common.sh@10 -- # set +x 00:04:56.554 ************************************ 00:04:56.554 START TEST env 00:04:56.554 ************************************ 00:04:56.554 13:47:45 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:56.554 * Looking for test storage... 
00:04:56.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:56.554 13:47:45 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:56.554 13:47:45 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.554 13:47:45 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.554 13:47:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.554 ************************************ 00:04:56.554 START TEST env_memory 00:04:56.554 ************************************ 00:04:56.554 13:47:45 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:56.554 00:04:56.554 00:04:56.554 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.554 http://cunit.sourceforge.net/ 00:04:56.554 00:04:56.554 00:04:56.554 Suite: memory 00:04:56.554 Test: alloc and free memory map ...[2024-07-25 13:47:45.485627] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:56.554 passed 00:04:56.555 Test: mem map translation ...[2024-07-25 13:47:45.516467] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:56.555 [2024-07-25 13:47:45.516516] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:56.555 [2024-07-25 13:47:45.516572] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:56.555 [2024-07-25 13:47:45.516590] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:56.555 passed 00:04:56.555 Test: mem map registration ...[2024-07-25 13:47:45.580288] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:56.555 [2024-07-25 13:47:45.580326] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:56.814 passed 00:04:56.814 Test: mem map adjacent registrations ...passed 00:04:56.814 00:04:56.814 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.814 suites 1 1 n/a 0 0 00:04:56.814 tests 4 4 4 0 0 00:04:56.814 asserts 152 152 152 0 n/a 00:04:56.814 00:04:56.814 Elapsed time = 0.214 seconds 00:04:56.814 00:04:56.814 real 0m0.231s 00:04:56.814 user 0m0.216s 00:04:56.814 sys 0m0.011s 00:04:56.814 13:47:45 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.814 13:47:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:56.814 ************************************ 00:04:56.814 END TEST env_memory 00:04:56.814 ************************************ 00:04:56.814 13:47:45 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:56.814 13:47:45 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.814 13:47:45 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.814 13:47:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.814 ************************************ 00:04:56.814 START TEST env_vtophys 00:04:56.814 ************************************ 00:04:56.814 13:47:45 
env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:56.814 EAL: lib.eal log level changed from notice to debug 00:04:56.814 EAL: Detected lcore 0 as core 0 on socket 0 00:04:56.814 EAL: Detected lcore 1 as core 0 on socket 0 00:04:56.814 EAL: Detected lcore 2 as core 0 on socket 0 00:04:56.814 EAL: Detected lcore 3 as core 0 on socket 0 00:04:56.814 EAL: Detected lcore 4 as core 0 on socket 0 00:04:56.814 EAL: Detected lcore 5 as core 0 on socket 0 00:04:56.814 EAL: Detected lcore 6 as core 0 on socket 0 00:04:56.814 EAL: Detected lcore 7 as core 0 on socket 0 00:04:56.814 EAL: Detected lcore 8 as core 0 on socket 0 00:04:56.814 EAL: Detected lcore 9 as core 0 on socket 0 00:04:56.814 EAL: Maximum logical cores by configuration: 128 00:04:56.814 EAL: Detected CPU lcores: 10 00:04:56.814 EAL: Detected NUMA nodes: 1 00:04:56.814 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:56.814 EAL: Detected shared linkage of DPDK 00:04:56.814 EAL: No shared files mode enabled, IPC will be disabled 00:04:56.814 EAL: Selected IOVA mode 'PA' 00:04:56.814 EAL: Probing VFIO support... 00:04:56.814 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:56.814 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:56.814 EAL: Ask a virtual area of 0x2e000 bytes 00:04:56.814 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:56.814 EAL: Setting up physically contiguous memory... 00:04:56.814 EAL: Setting maximum number of open files to 524288 00:04:56.814 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:56.814 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:56.814 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.814 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:56.814 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.814 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.814 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:56.814 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:56.814 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.814 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:56.814 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.814 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.814 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:56.814 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:56.814 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.814 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:56.814 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.814 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.814 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:56.814 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:56.814 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.814 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:56.814 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.814 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.814 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:56.814 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:56.814 EAL: Hugepages will be freed exactly as allocated. 
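(The EAL banner continues below with the expand/shrink cycles of the malloc tests. The hugepage and PCI environment it reports on is the one scripts/setup.sh prepared earlier in this run; reproducing just this piece by hand would look roughly like the following, where HUGEMEM and PCI_ALLOWED are the knobs setup.sh reads and the values shown are illustrative rather than the ones this job used.)

  cd /home/vagrant/spdk_repo/spdk
  sudo HUGEMEM=2048 PCI_ALLOWED="0000:00:10.0 0000:00:11.0" ./scripts/setup.sh
  ./scripts/setup.sh status            # hugepage counts plus device/driver bindings
  ./test/env/vtophys/vtophys           # the unit test emitting the EAL log shown here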
00:04:56.814 EAL: No shared files mode enabled, IPC is disabled 00:04:56.814 EAL: No shared files mode enabled, IPC is disabled 00:04:57.073 EAL: TSC frequency is ~2200000 KHz 00:04:57.073 EAL: Main lcore 0 is ready (tid=7f2799a3da00;cpuset=[0]) 00:04:57.073 EAL: Trying to obtain current memory policy. 00:04:57.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.073 EAL: Restoring previous memory policy: 0 00:04:57.073 EAL: request: mp_malloc_sync 00:04:57.073 EAL: No shared files mode enabled, IPC is disabled 00:04:57.073 EAL: Heap on socket 0 was expanded by 2MB 00:04:57.073 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:57.073 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:57.073 EAL: Mem event callback 'spdk:(nil)' registered 00:04:57.073 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:57.073 00:04:57.073 00:04:57.073 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.073 http://cunit.sourceforge.net/ 00:04:57.073 00:04:57.073 00:04:57.073 Suite: components_suite 00:04:57.073 Test: vtophys_malloc_test ...passed 00:04:57.073 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:57.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.073 EAL: Restoring previous memory policy: 4 00:04:57.073 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.073 EAL: request: mp_malloc_sync 00:04:57.073 EAL: No shared files mode enabled, IPC is disabled 00:04:57.073 EAL: Heap on socket 0 was expanded by 4MB 00:04:57.073 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.073 EAL: request: mp_malloc_sync 00:04:57.073 EAL: No shared files mode enabled, IPC is disabled 00:04:57.073 EAL: Heap on socket 0 was shrunk by 4MB 00:04:57.073 EAL: Trying to obtain current memory policy. 00:04:57.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.073 EAL: Restoring previous memory policy: 4 00:04:57.073 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.073 EAL: request: mp_malloc_sync 00:04:57.073 EAL: No shared files mode enabled, IPC is disabled 00:04:57.073 EAL: Heap on socket 0 was expanded by 6MB 00:04:57.073 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.073 EAL: request: mp_malloc_sync 00:04:57.073 EAL: No shared files mode enabled, IPC is disabled 00:04:57.073 EAL: Heap on socket 0 was shrunk by 6MB 00:04:57.073 EAL: Trying to obtain current memory policy. 00:04:57.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.073 EAL: Restoring previous memory policy: 4 00:04:57.073 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.073 EAL: request: mp_malloc_sync 00:04:57.073 EAL: No shared files mode enabled, IPC is disabled 00:04:57.073 EAL: Heap on socket 0 was expanded by 10MB 00:04:57.073 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.073 EAL: request: mp_malloc_sync 00:04:57.073 EAL: No shared files mode enabled, IPC is disabled 00:04:57.073 EAL: Heap on socket 0 was shrunk by 10MB 00:04:57.073 EAL: Trying to obtain current memory policy. 
00:04:57.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.073 EAL: Restoring previous memory policy: 4 00:04:57.073 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.073 EAL: request: mp_malloc_sync 00:04:57.073 EAL: No shared files mode enabled, IPC is disabled 00:04:57.073 EAL: Heap on socket 0 was expanded by 18MB 00:04:57.073 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.073 EAL: request: mp_malloc_sync 00:04:57.073 EAL: No shared files mode enabled, IPC is disabled 00:04:57.073 EAL: Heap on socket 0 was shrunk by 18MB 00:04:57.073 EAL: Trying to obtain current memory policy. 00:04:57.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.073 EAL: Restoring previous memory policy: 4 00:04:57.073 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.073 EAL: request: mp_malloc_sync 00:04:57.073 EAL: No shared files mode enabled, IPC is disabled 00:04:57.073 EAL: Heap on socket 0 was expanded by 34MB 00:04:57.073 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.073 EAL: request: mp_malloc_sync 00:04:57.073 EAL: No shared files mode enabled, IPC is disabled 00:04:57.073 EAL: Heap on socket 0 was shrunk by 34MB 00:04:57.073 EAL: Trying to obtain current memory policy. 00:04:57.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.073 EAL: Restoring previous memory policy: 4 00:04:57.073 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.073 EAL: request: mp_malloc_sync 00:04:57.073 EAL: No shared files mode enabled, IPC is disabled 00:04:57.073 EAL: Heap on socket 0 was expanded by 66MB 00:04:57.073 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.073 EAL: request: mp_malloc_sync 00:04:57.073 EAL: No shared files mode enabled, IPC is disabled 00:04:57.073 EAL: Heap on socket 0 was shrunk by 66MB 00:04:57.073 EAL: Trying to obtain current memory policy. 00:04:57.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.073 EAL: Restoring previous memory policy: 4 00:04:57.073 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.073 EAL: request: mp_malloc_sync 00:04:57.074 EAL: No shared files mode enabled, IPC is disabled 00:04:57.074 EAL: Heap on socket 0 was expanded by 130MB 00:04:57.074 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.074 EAL: request: mp_malloc_sync 00:04:57.074 EAL: No shared files mode enabled, IPC is disabled 00:04:57.074 EAL: Heap on socket 0 was shrunk by 130MB 00:04:57.074 EAL: Trying to obtain current memory policy. 00:04:57.074 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.074 EAL: Restoring previous memory policy: 4 00:04:57.074 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.074 EAL: request: mp_malloc_sync 00:04:57.074 EAL: No shared files mode enabled, IPC is disabled 00:04:57.074 EAL: Heap on socket 0 was expanded by 258MB 00:04:57.332 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.332 EAL: request: mp_malloc_sync 00:04:57.332 EAL: No shared files mode enabled, IPC is disabled 00:04:57.332 EAL: Heap on socket 0 was shrunk by 258MB 00:04:57.332 EAL: Trying to obtain current memory policy. 
00:04:57.332 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.332 EAL: Restoring previous memory policy: 4 00:04:57.332 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.332 EAL: request: mp_malloc_sync 00:04:57.332 EAL: No shared files mode enabled, IPC is disabled 00:04:57.332 EAL: Heap on socket 0 was expanded by 514MB 00:04:57.590 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.591 EAL: request: mp_malloc_sync 00:04:57.591 EAL: No shared files mode enabled, IPC is disabled 00:04:57.591 EAL: Heap on socket 0 was shrunk by 514MB 00:04:57.591 EAL: Trying to obtain current memory policy. 00:04:57.591 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.850 EAL: Restoring previous memory policy: 4 00:04:57.850 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.850 EAL: request: mp_malloc_sync 00:04:57.850 EAL: No shared files mode enabled, IPC is disabled 00:04:57.850 EAL: Heap on socket 0 was expanded by 1026MB 00:04:58.109 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.368 passed 00:04:58.368 00:04:58.368 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.368 suites 1 1 n/a 0 0 00:04:58.368 tests 2 2 2 0 0 00:04:58.368 asserts 5274 5274 5274 0 n/a 00:04:58.368 00:04:58.368 Elapsed time = 1.254 seconds 00:04:58.368 EAL: request: mp_malloc_sync 00:04:58.368 EAL: No shared files mode enabled, IPC is disabled 00:04:58.368 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:58.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.368 EAL: request: mp_malloc_sync 00:04:58.368 EAL: No shared files mode enabled, IPC is disabled 00:04:58.368 EAL: Heap on socket 0 was shrunk by 2MB 00:04:58.368 EAL: No shared files mode enabled, IPC is disabled 00:04:58.368 EAL: No shared files mode enabled, IPC is disabled 00:04:58.368 EAL: No shared files mode enabled, IPC is disabled 00:04:58.368 00:04:58.368 real 0m1.449s 00:04:58.368 user 0m0.798s 00:04:58.368 sys 0m0.519s 00:04:58.368 13:47:47 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.368 13:47:47 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:58.368 ************************************ 00:04:58.368 END TEST env_vtophys 00:04:58.368 ************************************ 00:04:58.368 13:47:47 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:58.368 13:47:47 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.368 13:47:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.368 13:47:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.368 ************************************ 00:04:58.368 START TEST env_pci 00:04:58.368 ************************************ 00:04:58.368 13:47:47 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:58.368 00:04:58.368 00:04:58.368 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.368 http://cunit.sourceforge.net/ 00:04:58.368 00:04:58.368 00:04:58.368 Suite: pci 00:04:58.368 Test: pci_hook ...[2024-07-25 13:47:47.232611] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58642 has claimed it 00:04:58.368 passed 00:04:58.368 00:04:58.368 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.368 suites 1 1 n/a 0 0 00:04:58.368 tests 1 1 1 0 0 00:04:58.368 asserts 25 25 25 0 n/a 00:04:58.368 00:04:58.368 Elapsed time = 0.002 seconds 00:04:58.368 EAL: Cannot find 
device (10000:00:01.0) 00:04:58.368 EAL: Failed to attach device on primary process 00:04:58.368 00:04:58.368 real 0m0.019s 00:04:58.368 user 0m0.009s 00:04:58.368 sys 0m0.009s 00:04:58.368 13:47:47 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.368 13:47:47 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:58.368 ************************************ 00:04:58.368 END TEST env_pci 00:04:58.368 ************************************ 00:04:58.368 13:47:47 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:58.368 13:47:47 env -- env/env.sh@15 -- # uname 00:04:58.368 13:47:47 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:58.368 13:47:47 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:58.368 13:47:47 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:58.368 13:47:47 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:58.368 13:47:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.368 13:47:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.368 ************************************ 00:04:58.368 START TEST env_dpdk_post_init 00:04:58.368 ************************************ 00:04:58.368 13:47:47 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:58.368 EAL: Detected CPU lcores: 10 00:04:58.368 EAL: Detected NUMA nodes: 1 00:04:58.368 EAL: Detected shared linkage of DPDK 00:04:58.368 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:58.368 EAL: Selected IOVA mode 'PA' 00:04:58.626 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:58.626 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:58.626 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:58.626 Starting DPDK initialization... 00:04:58.626 Starting SPDK post initialization... 00:04:58.626 SPDK NVMe probe 00:04:58.626 Attaching to 0000:00:10.0 00:04:58.626 Attaching to 0000:00:11.0 00:04:58.626 Attached to 0000:00:10.0 00:04:58.626 Attached to 0000:00:11.0 00:04:58.626 Cleaning up... 
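(The probe above attached both controllers that setup.sh had bound for userspace use; the test's timing summary follows below. Re-running just this binary by hand uses the same flags the harness passed in; the path and flags are taken from the trace above, and the grep is only a convenience check.)

  cd /home/vagrant/spdk_repo/spdk
  sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
  ./scripts/setup.sh status | grep -E '00:10\.0|00:11\.0'    # which driver owns the two NVMe functions now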
00:04:58.626 00:04:58.626 real 0m0.175s 00:04:58.626 user 0m0.040s 00:04:58.626 sys 0m0.035s 00:04:58.626 13:47:47 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.626 ************************************ 00:04:58.626 END TEST env_dpdk_post_init 00:04:58.626 13:47:47 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:58.626 ************************************ 00:04:58.626 13:47:47 env -- env/env.sh@26 -- # uname 00:04:58.626 13:47:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:58.626 13:47:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:58.626 13:47:47 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.626 13:47:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.626 13:47:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.626 ************************************ 00:04:58.626 START TEST env_mem_callbacks 00:04:58.626 ************************************ 00:04:58.626 13:47:47 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:58.626 EAL: Detected CPU lcores: 10 00:04:58.627 EAL: Detected NUMA nodes: 1 00:04:58.627 EAL: Detected shared linkage of DPDK 00:04:58.627 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:58.627 EAL: Selected IOVA mode 'PA' 00:04:58.884 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:58.884 00:04:58.884 00:04:58.884 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.884 http://cunit.sourceforge.net/ 00:04:58.884 00:04:58.884 00:04:58.884 Suite: memory 00:04:58.884 Test: test ... 00:04:58.884 register 0x200000200000 2097152 00:04:58.884 malloc 3145728 00:04:58.884 register 0x200000400000 4194304 00:04:58.884 buf 0x200000500000 len 3145728 PASSED 00:04:58.884 malloc 64 00:04:58.884 buf 0x2000004fff40 len 64 PASSED 00:04:58.884 malloc 4194304 00:04:58.884 register 0x200000800000 6291456 00:04:58.884 buf 0x200000a00000 len 4194304 PASSED 00:04:58.884 free 0x200000500000 3145728 00:04:58.884 free 0x2000004fff40 64 00:04:58.884 unregister 0x200000400000 4194304 PASSED 00:04:58.884 free 0x200000a00000 4194304 00:04:58.884 unregister 0x200000800000 6291456 PASSED 00:04:58.884 malloc 8388608 00:04:58.884 register 0x200000400000 10485760 00:04:58.884 buf 0x200000600000 len 8388608 PASSED 00:04:58.884 free 0x200000600000 8388608 00:04:58.884 unregister 0x200000400000 10485760 PASSED 00:04:58.884 passed 00:04:58.884 00:04:58.884 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.884 suites 1 1 n/a 0 0 00:04:58.884 tests 1 1 1 0 0 00:04:58.884 asserts 15 15 15 0 n/a 00:04:58.884 00:04:58.884 Elapsed time = 0.008 seconds 00:04:58.884 00:04:58.884 real 0m0.144s 00:04:58.884 user 0m0.019s 00:04:58.884 sys 0m0.024s 00:04:58.884 13:47:47 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.884 ************************************ 00:04:58.884 END TEST env_mem_callbacks 00:04:58.884 13:47:47 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:58.884 ************************************ 00:04:58.884 00:04:58.884 real 0m2.369s 00:04:58.884 user 0m1.207s 00:04:58.884 sys 0m0.810s 00:04:58.884 13:47:47 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.884 13:47:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.884 ************************************ 00:04:58.884 END TEST env 00:04:58.884 
************************************ 00:04:58.884 13:47:47 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:58.884 13:47:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.884 13:47:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.884 13:47:47 -- common/autotest_common.sh@10 -- # set +x 00:04:58.884 ************************************ 00:04:58.884 START TEST rpc 00:04:58.884 ************************************ 00:04:58.884 13:47:47 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:58.884 * Looking for test storage... 00:04:58.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:58.884 13:47:47 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58751 00:04:58.884 13:47:47 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:58.884 13:47:47 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.884 13:47:47 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58751 00:04:58.884 13:47:47 rpc -- common/autotest_common.sh@831 -- # '[' -z 58751 ']' 00:04:58.884 13:47:47 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.884 13:47:47 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:58.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.884 13:47:47 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.884 13:47:47 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:58.884 13:47:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.884 [2024-07-25 13:47:47.899524] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:04:58.884 [2024-07-25 13:47:47.899665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58751 ] 00:04:59.164 [2024-07-25 13:47:48.031023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.164 [2024-07-25 13:47:48.120666] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:59.164 [2024-07-25 13:47:48.120783] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58751' to capture a snapshot of events at runtime. 00:04:59.164 [2024-07-25 13:47:48.120819] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:59.164 [2024-07-25 13:47:48.120828] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:59.164 [2024-07-25 13:47:48.120842] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58751 for offline analysis/debug. 
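The NOTICE lines above describe how to inspect the bdev tracepoint group that rpc.sh enables with '-e bdev'. A rough sketch of that workflow, assuming the same build tree and a fixed sleep in place of the harness's socket polling:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/bin/spdk_tgt" -e bdev &                     # enable the bdev tracepoint group
  tgt_pid=$!
  sleep 2                                                      # crude startup wait (assumption)
  # Live snapshot of events, exactly as the NOTICE suggests
  "$SPDK_DIR/build/bin/spdk_trace" -s spdk_tgt -p "$tgt_pid"
  # Or keep the shared-memory trace file for offline analysis
  cp "/dev/shm/spdk_tgt_trace.pid${tgt_pid}" /tmp/
  kill "$tgt_pid"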
00:04:59.164 [2024-07-25 13:47:48.120869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.164 [2024-07-25 13:47:48.177871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:00.097 13:47:48 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.097 13:47:48 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:00.097 13:47:48 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:00.097 13:47:48 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:00.097 13:47:48 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:00.097 13:47:48 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:00.097 13:47:48 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.097 13:47:48 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.097 13:47:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.097 ************************************ 00:05:00.097 START TEST rpc_integrity 00:05:00.097 ************************************ 00:05:00.097 13:47:48 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:00.097 13:47:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:00.097 13:47:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.097 13:47:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.097 13:47:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.097 13:47:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:00.097 13:47:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:00.097 13:47:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:00.097 13:47:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:00.097 13:47:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.097 13:47:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.097 13:47:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.097 13:47:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:00.097 13:47:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:00.097 13:47:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.097 13:47:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.097 13:47:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.097 13:47:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:00.097 { 00:05:00.097 "name": "Malloc0", 00:05:00.097 "aliases": [ 00:05:00.097 "60d95cb3-31e0-4798-b317-50eb9a653b51" 00:05:00.097 ], 00:05:00.097 "product_name": "Malloc disk", 00:05:00.097 "block_size": 512, 00:05:00.097 "num_blocks": 16384, 00:05:00.097 "uuid": "60d95cb3-31e0-4798-b317-50eb9a653b51", 00:05:00.097 "assigned_rate_limits": { 00:05:00.097 "rw_ios_per_sec": 0, 00:05:00.097 "rw_mbytes_per_sec": 0, 00:05:00.097 "r_mbytes_per_sec": 0, 00:05:00.097 "w_mbytes_per_sec": 0 00:05:00.097 }, 00:05:00.097 "claimed": false, 00:05:00.097 "zoned": false, 00:05:00.097 
"supported_io_types": { 00:05:00.097 "read": true, 00:05:00.097 "write": true, 00:05:00.097 "unmap": true, 00:05:00.097 "flush": true, 00:05:00.097 "reset": true, 00:05:00.097 "nvme_admin": false, 00:05:00.097 "nvme_io": false, 00:05:00.097 "nvme_io_md": false, 00:05:00.097 "write_zeroes": true, 00:05:00.097 "zcopy": true, 00:05:00.097 "get_zone_info": false, 00:05:00.097 "zone_management": false, 00:05:00.097 "zone_append": false, 00:05:00.097 "compare": false, 00:05:00.097 "compare_and_write": false, 00:05:00.097 "abort": true, 00:05:00.097 "seek_hole": false, 00:05:00.097 "seek_data": false, 00:05:00.097 "copy": true, 00:05:00.097 "nvme_iov_md": false 00:05:00.097 }, 00:05:00.097 "memory_domains": [ 00:05:00.097 { 00:05:00.097 "dma_device_id": "system", 00:05:00.097 "dma_device_type": 1 00:05:00.097 }, 00:05:00.097 { 00:05:00.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.097 "dma_device_type": 2 00:05:00.097 } 00:05:00.097 ], 00:05:00.097 "driver_specific": {} 00:05:00.097 } 00:05:00.097 ]' 00:05:00.097 13:47:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:00.097 13:47:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:00.097 13:47:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:00.097 13:47:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.097 13:47:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.097 [2024-07-25 13:47:49.059104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:00.097 [2024-07-25 13:47:49.059174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:00.097 [2024-07-25 13:47:49.059191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1555da0 00:05:00.097 [2024-07-25 13:47:49.059200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:00.097 [2024-07-25 13:47:49.060860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:00.097 [2024-07-25 13:47:49.060894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:00.097 Passthru0 00:05:00.097 13:47:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.097 13:47:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:00.097 13:47:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.097 13:47:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.097 13:47:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.097 13:47:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:00.097 { 00:05:00.097 "name": "Malloc0", 00:05:00.097 "aliases": [ 00:05:00.097 "60d95cb3-31e0-4798-b317-50eb9a653b51" 00:05:00.097 ], 00:05:00.097 "product_name": "Malloc disk", 00:05:00.097 "block_size": 512, 00:05:00.097 "num_blocks": 16384, 00:05:00.097 "uuid": "60d95cb3-31e0-4798-b317-50eb9a653b51", 00:05:00.097 "assigned_rate_limits": { 00:05:00.097 "rw_ios_per_sec": 0, 00:05:00.097 "rw_mbytes_per_sec": 0, 00:05:00.097 "r_mbytes_per_sec": 0, 00:05:00.097 "w_mbytes_per_sec": 0 00:05:00.097 }, 00:05:00.097 "claimed": true, 00:05:00.097 "claim_type": "exclusive_write", 00:05:00.097 "zoned": false, 00:05:00.097 "supported_io_types": { 00:05:00.097 "read": true, 00:05:00.097 "write": true, 00:05:00.097 "unmap": true, 00:05:00.097 "flush": true, 00:05:00.097 "reset": true, 00:05:00.097 "nvme_admin": false, 
00:05:00.097 "nvme_io": false, 00:05:00.097 "nvme_io_md": false, 00:05:00.097 "write_zeroes": true, 00:05:00.097 "zcopy": true, 00:05:00.097 "get_zone_info": false, 00:05:00.097 "zone_management": false, 00:05:00.097 "zone_append": false, 00:05:00.097 "compare": false, 00:05:00.097 "compare_and_write": false, 00:05:00.097 "abort": true, 00:05:00.097 "seek_hole": false, 00:05:00.097 "seek_data": false, 00:05:00.097 "copy": true, 00:05:00.097 "nvme_iov_md": false 00:05:00.097 }, 00:05:00.097 "memory_domains": [ 00:05:00.097 { 00:05:00.097 "dma_device_id": "system", 00:05:00.097 "dma_device_type": 1 00:05:00.097 }, 00:05:00.097 { 00:05:00.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.097 "dma_device_type": 2 00:05:00.097 } 00:05:00.097 ], 00:05:00.097 "driver_specific": {} 00:05:00.097 }, 00:05:00.097 { 00:05:00.097 "name": "Passthru0", 00:05:00.097 "aliases": [ 00:05:00.097 "cf73dd5b-f2dc-5f4f-aeb8-59aa9761a7ec" 00:05:00.097 ], 00:05:00.097 "product_name": "passthru", 00:05:00.097 "block_size": 512, 00:05:00.097 "num_blocks": 16384, 00:05:00.097 "uuid": "cf73dd5b-f2dc-5f4f-aeb8-59aa9761a7ec", 00:05:00.098 "assigned_rate_limits": { 00:05:00.098 "rw_ios_per_sec": 0, 00:05:00.098 "rw_mbytes_per_sec": 0, 00:05:00.098 "r_mbytes_per_sec": 0, 00:05:00.098 "w_mbytes_per_sec": 0 00:05:00.098 }, 00:05:00.098 "claimed": false, 00:05:00.098 "zoned": false, 00:05:00.098 "supported_io_types": { 00:05:00.098 "read": true, 00:05:00.098 "write": true, 00:05:00.098 "unmap": true, 00:05:00.098 "flush": true, 00:05:00.098 "reset": true, 00:05:00.098 "nvme_admin": false, 00:05:00.098 "nvme_io": false, 00:05:00.098 "nvme_io_md": false, 00:05:00.098 "write_zeroes": true, 00:05:00.098 "zcopy": true, 00:05:00.098 "get_zone_info": false, 00:05:00.098 "zone_management": false, 00:05:00.098 "zone_append": false, 00:05:00.098 "compare": false, 00:05:00.098 "compare_and_write": false, 00:05:00.098 "abort": true, 00:05:00.098 "seek_hole": false, 00:05:00.098 "seek_data": false, 00:05:00.098 "copy": true, 00:05:00.098 "nvme_iov_md": false 00:05:00.098 }, 00:05:00.098 "memory_domains": [ 00:05:00.098 { 00:05:00.098 "dma_device_id": "system", 00:05:00.098 "dma_device_type": 1 00:05:00.098 }, 00:05:00.098 { 00:05:00.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.098 "dma_device_type": 2 00:05:00.098 } 00:05:00.098 ], 00:05:00.098 "driver_specific": { 00:05:00.098 "passthru": { 00:05:00.098 "name": "Passthru0", 00:05:00.098 "base_bdev_name": "Malloc0" 00:05:00.098 } 00:05:00.098 } 00:05:00.098 } 00:05:00.098 ]' 00:05:00.098 13:47:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:00.356 13:47:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:00.356 13:47:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:00.356 13:47:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.356 13:47:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.356 13:47:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.356 13:47:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:00.356 13:47:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.356 13:47:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.356 13:47:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.356 13:47:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:00.356 13:47:49 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.356 13:47:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.356 13:47:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.356 13:47:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:00.356 13:47:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:00.356 13:47:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:00.356 00:05:00.356 real 0m0.310s 00:05:00.356 user 0m0.208s 00:05:00.356 sys 0m0.030s 00:05:00.356 13:47:49 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.356 13:47:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.356 ************************************ 00:05:00.356 END TEST rpc_integrity 00:05:00.356 ************************************ 00:05:00.356 13:47:49 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:00.356 13:47:49 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.356 13:47:49 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.356 13:47:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.356 ************************************ 00:05:00.356 START TEST rpc_plugins 00:05:00.356 ************************************ 00:05:00.356 13:47:49 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:00.356 13:47:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:00.356 13:47:49 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.356 13:47:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.356 13:47:49 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.356 13:47:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:00.356 13:47:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:00.356 13:47:49 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.356 13:47:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.356 13:47:49 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.356 13:47:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:00.356 { 00:05:00.356 "name": "Malloc1", 00:05:00.356 "aliases": [ 00:05:00.356 "ee6ccd7d-bdf0-4214-93fb-e441cfd838e6" 00:05:00.356 ], 00:05:00.356 "product_name": "Malloc disk", 00:05:00.356 "block_size": 4096, 00:05:00.356 "num_blocks": 256, 00:05:00.356 "uuid": "ee6ccd7d-bdf0-4214-93fb-e441cfd838e6", 00:05:00.356 "assigned_rate_limits": { 00:05:00.356 "rw_ios_per_sec": 0, 00:05:00.356 "rw_mbytes_per_sec": 0, 00:05:00.356 "r_mbytes_per_sec": 0, 00:05:00.356 "w_mbytes_per_sec": 0 00:05:00.356 }, 00:05:00.356 "claimed": false, 00:05:00.356 "zoned": false, 00:05:00.356 "supported_io_types": { 00:05:00.356 "read": true, 00:05:00.356 "write": true, 00:05:00.356 "unmap": true, 00:05:00.356 "flush": true, 00:05:00.356 "reset": true, 00:05:00.356 "nvme_admin": false, 00:05:00.356 "nvme_io": false, 00:05:00.356 "nvme_io_md": false, 00:05:00.356 "write_zeroes": true, 00:05:00.356 "zcopy": true, 00:05:00.356 "get_zone_info": false, 00:05:00.356 "zone_management": false, 00:05:00.356 "zone_append": false, 00:05:00.356 "compare": false, 00:05:00.356 "compare_and_write": false, 00:05:00.356 "abort": true, 00:05:00.356 "seek_hole": false, 00:05:00.356 "seek_data": false, 00:05:00.356 "copy": true, 00:05:00.356 "nvme_iov_md": false 00:05:00.356 }, 00:05:00.356 "memory_domains": [ 00:05:00.356 { 
00:05:00.356 "dma_device_id": "system", 00:05:00.356 "dma_device_type": 1 00:05:00.356 }, 00:05:00.356 { 00:05:00.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.356 "dma_device_type": 2 00:05:00.356 } 00:05:00.356 ], 00:05:00.356 "driver_specific": {} 00:05:00.356 } 00:05:00.356 ]' 00:05:00.356 13:47:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:00.356 13:47:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:00.356 13:47:49 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:00.356 13:47:49 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.356 13:47:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.356 13:47:49 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.356 13:47:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:00.356 13:47:49 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.356 13:47:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.356 13:47:49 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.356 13:47:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:00.356 13:47:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:00.614 13:47:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:00.614 00:05:00.614 real 0m0.151s 00:05:00.614 user 0m0.100s 00:05:00.614 sys 0m0.016s 00:05:00.614 13:47:49 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.614 ************************************ 00:05:00.614 END TEST rpc_plugins 00:05:00.614 13:47:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.614 ************************************ 00:05:00.614 13:47:49 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:00.614 13:47:49 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.614 13:47:49 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.614 13:47:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.614 ************************************ 00:05:00.614 START TEST rpc_trace_cmd_test 00:05:00.614 ************************************ 00:05:00.614 13:47:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:00.614 13:47:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:00.614 13:47:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:00.614 13:47:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.614 13:47:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:00.614 13:47:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.614 13:47:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:00.614 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58751", 00:05:00.614 "tpoint_group_mask": "0x8", 00:05:00.614 "iscsi_conn": { 00:05:00.614 "mask": "0x2", 00:05:00.614 "tpoint_mask": "0x0" 00:05:00.614 }, 00:05:00.614 "scsi": { 00:05:00.614 "mask": "0x4", 00:05:00.614 "tpoint_mask": "0x0" 00:05:00.614 }, 00:05:00.614 "bdev": { 00:05:00.614 "mask": "0x8", 00:05:00.614 "tpoint_mask": "0xffffffffffffffff" 00:05:00.614 }, 00:05:00.614 "nvmf_rdma": { 00:05:00.614 "mask": "0x10", 00:05:00.614 "tpoint_mask": "0x0" 00:05:00.614 }, 00:05:00.614 "nvmf_tcp": { 00:05:00.614 "mask": "0x20", 00:05:00.614 "tpoint_mask": "0x0" 00:05:00.614 }, 00:05:00.614 "ftl": { 00:05:00.614 
"mask": "0x40", 00:05:00.614 "tpoint_mask": "0x0" 00:05:00.614 }, 00:05:00.614 "blobfs": { 00:05:00.615 "mask": "0x80", 00:05:00.615 "tpoint_mask": "0x0" 00:05:00.615 }, 00:05:00.615 "dsa": { 00:05:00.615 "mask": "0x200", 00:05:00.615 "tpoint_mask": "0x0" 00:05:00.615 }, 00:05:00.615 "thread": { 00:05:00.615 "mask": "0x400", 00:05:00.615 "tpoint_mask": "0x0" 00:05:00.615 }, 00:05:00.615 "nvme_pcie": { 00:05:00.615 "mask": "0x800", 00:05:00.615 "tpoint_mask": "0x0" 00:05:00.615 }, 00:05:00.615 "iaa": { 00:05:00.615 "mask": "0x1000", 00:05:00.615 "tpoint_mask": "0x0" 00:05:00.615 }, 00:05:00.615 "nvme_tcp": { 00:05:00.615 "mask": "0x2000", 00:05:00.615 "tpoint_mask": "0x0" 00:05:00.615 }, 00:05:00.615 "bdev_nvme": { 00:05:00.615 "mask": "0x4000", 00:05:00.615 "tpoint_mask": "0x0" 00:05:00.615 }, 00:05:00.615 "sock": { 00:05:00.615 "mask": "0x8000", 00:05:00.615 "tpoint_mask": "0x0" 00:05:00.615 } 00:05:00.615 }' 00:05:00.615 13:47:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:00.615 13:47:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:00.615 13:47:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:00.615 13:47:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:00.615 13:47:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:00.615 13:47:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:00.615 13:47:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:00.873 13:47:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:00.873 13:47:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:00.873 13:47:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:00.873 00:05:00.873 real 0m0.272s 00:05:00.873 user 0m0.234s 00:05:00.873 sys 0m0.028s 00:05:00.873 13:47:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.873 ************************************ 00:05:00.873 END TEST rpc_trace_cmd_test 00:05:00.873 13:47:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:00.873 ************************************ 00:05:00.873 13:47:49 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:00.873 13:47:49 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:00.873 13:47:49 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:00.873 13:47:49 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.873 13:47:49 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.873 13:47:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.873 ************************************ 00:05:00.873 START TEST rpc_daemon_integrity 00:05:00.873 ************************************ 00:05:00.873 13:47:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:00.873 13:47:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:00.873 13:47:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.873 13:47:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.873 13:47:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.873 13:47:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:00.873 13:47:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:00.873 13:47:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:05:00.873 13:47:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:00.873 13:47:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.873 13:47:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.873 13:47:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.873 13:47:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:00.873 13:47:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:00.873 13:47:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.873 13:47:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.873 13:47:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.873 13:47:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:00.873 { 00:05:00.873 "name": "Malloc2", 00:05:00.873 "aliases": [ 00:05:00.873 "e7233cfa-8543-4b12-9e23-1240d461e2f2" 00:05:00.873 ], 00:05:00.873 "product_name": "Malloc disk", 00:05:00.873 "block_size": 512, 00:05:00.873 "num_blocks": 16384, 00:05:00.873 "uuid": "e7233cfa-8543-4b12-9e23-1240d461e2f2", 00:05:00.873 "assigned_rate_limits": { 00:05:00.873 "rw_ios_per_sec": 0, 00:05:00.873 "rw_mbytes_per_sec": 0, 00:05:00.873 "r_mbytes_per_sec": 0, 00:05:00.873 "w_mbytes_per_sec": 0 00:05:00.873 }, 00:05:00.873 "claimed": false, 00:05:00.873 "zoned": false, 00:05:00.873 "supported_io_types": { 00:05:00.873 "read": true, 00:05:00.873 "write": true, 00:05:00.873 "unmap": true, 00:05:00.873 "flush": true, 00:05:00.873 "reset": true, 00:05:00.873 "nvme_admin": false, 00:05:00.873 "nvme_io": false, 00:05:00.873 "nvme_io_md": false, 00:05:00.873 "write_zeroes": true, 00:05:00.873 "zcopy": true, 00:05:00.873 "get_zone_info": false, 00:05:00.873 "zone_management": false, 00:05:00.873 "zone_append": false, 00:05:00.873 "compare": false, 00:05:00.873 "compare_and_write": false, 00:05:00.873 "abort": true, 00:05:00.873 "seek_hole": false, 00:05:00.873 "seek_data": false, 00:05:00.873 "copy": true, 00:05:00.873 "nvme_iov_md": false 00:05:00.873 }, 00:05:00.873 "memory_domains": [ 00:05:00.873 { 00:05:00.873 "dma_device_id": "system", 00:05:00.873 "dma_device_type": 1 00:05:00.873 }, 00:05:00.873 { 00:05:00.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.873 "dma_device_type": 2 00:05:00.873 } 00:05:00.873 ], 00:05:00.873 "driver_specific": {} 00:05:00.873 } 00:05:00.873 ]' 00:05:00.873 13:47:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:01.132 13:47:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:01.132 13:47:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:01.132 13:47:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.132 13:47:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.132 [2024-07-25 13:47:49.943746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:01.132 [2024-07-25 13:47:49.943819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:01.132 [2024-07-25 13:47:49.943838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15babe0 00:05:01.132 [2024-07-25 13:47:49.943846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:01.132 [2024-07-25 13:47:49.945358] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:01.132 [2024-07-25 13:47:49.945433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:01.132 Passthru0 00:05:01.132 13:47:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.132 13:47:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:01.132 13:47:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.132 13:47:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.132 13:47:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.132 13:47:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:01.132 { 00:05:01.132 "name": "Malloc2", 00:05:01.132 "aliases": [ 00:05:01.132 "e7233cfa-8543-4b12-9e23-1240d461e2f2" 00:05:01.132 ], 00:05:01.132 "product_name": "Malloc disk", 00:05:01.132 "block_size": 512, 00:05:01.132 "num_blocks": 16384, 00:05:01.132 "uuid": "e7233cfa-8543-4b12-9e23-1240d461e2f2", 00:05:01.132 "assigned_rate_limits": { 00:05:01.132 "rw_ios_per_sec": 0, 00:05:01.132 "rw_mbytes_per_sec": 0, 00:05:01.132 "r_mbytes_per_sec": 0, 00:05:01.132 "w_mbytes_per_sec": 0 00:05:01.132 }, 00:05:01.132 "claimed": true, 00:05:01.132 "claim_type": "exclusive_write", 00:05:01.132 "zoned": false, 00:05:01.132 "supported_io_types": { 00:05:01.132 "read": true, 00:05:01.132 "write": true, 00:05:01.132 "unmap": true, 00:05:01.132 "flush": true, 00:05:01.132 "reset": true, 00:05:01.132 "nvme_admin": false, 00:05:01.132 "nvme_io": false, 00:05:01.132 "nvme_io_md": false, 00:05:01.132 "write_zeroes": true, 00:05:01.132 "zcopy": true, 00:05:01.132 "get_zone_info": false, 00:05:01.132 "zone_management": false, 00:05:01.132 "zone_append": false, 00:05:01.132 "compare": false, 00:05:01.132 "compare_and_write": false, 00:05:01.132 "abort": true, 00:05:01.132 "seek_hole": false, 00:05:01.132 "seek_data": false, 00:05:01.132 "copy": true, 00:05:01.132 "nvme_iov_md": false 00:05:01.132 }, 00:05:01.132 "memory_domains": [ 00:05:01.132 { 00:05:01.132 "dma_device_id": "system", 00:05:01.132 "dma_device_type": 1 00:05:01.132 }, 00:05:01.132 { 00:05:01.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.132 "dma_device_type": 2 00:05:01.132 } 00:05:01.132 ], 00:05:01.132 "driver_specific": {} 00:05:01.132 }, 00:05:01.132 { 00:05:01.132 "name": "Passthru0", 00:05:01.132 "aliases": [ 00:05:01.132 "0faf892f-fda4-5857-8ae9-88ea15f08400" 00:05:01.132 ], 00:05:01.132 "product_name": "passthru", 00:05:01.132 "block_size": 512, 00:05:01.132 "num_blocks": 16384, 00:05:01.132 "uuid": "0faf892f-fda4-5857-8ae9-88ea15f08400", 00:05:01.132 "assigned_rate_limits": { 00:05:01.132 "rw_ios_per_sec": 0, 00:05:01.132 "rw_mbytes_per_sec": 0, 00:05:01.132 "r_mbytes_per_sec": 0, 00:05:01.132 "w_mbytes_per_sec": 0 00:05:01.132 }, 00:05:01.132 "claimed": false, 00:05:01.132 "zoned": false, 00:05:01.132 "supported_io_types": { 00:05:01.132 "read": true, 00:05:01.132 "write": true, 00:05:01.132 "unmap": true, 00:05:01.132 "flush": true, 00:05:01.132 "reset": true, 00:05:01.132 "nvme_admin": false, 00:05:01.132 "nvme_io": false, 00:05:01.132 "nvme_io_md": false, 00:05:01.132 "write_zeroes": true, 00:05:01.132 "zcopy": true, 00:05:01.132 "get_zone_info": false, 00:05:01.132 "zone_management": false, 00:05:01.132 "zone_append": false, 00:05:01.132 "compare": false, 00:05:01.132 "compare_and_write": false, 00:05:01.132 "abort": true, 00:05:01.132 "seek_hole": false, 
00:05:01.132 "seek_data": false, 00:05:01.132 "copy": true, 00:05:01.132 "nvme_iov_md": false 00:05:01.132 }, 00:05:01.132 "memory_domains": [ 00:05:01.132 { 00:05:01.132 "dma_device_id": "system", 00:05:01.132 "dma_device_type": 1 00:05:01.132 }, 00:05:01.132 { 00:05:01.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.132 "dma_device_type": 2 00:05:01.132 } 00:05:01.132 ], 00:05:01.132 "driver_specific": { 00:05:01.132 "passthru": { 00:05:01.132 "name": "Passthru0", 00:05:01.132 "base_bdev_name": "Malloc2" 00:05:01.132 } 00:05:01.132 } 00:05:01.132 } 00:05:01.132 ]' 00:05:01.132 13:47:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:01.132 13:47:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:01.132 13:47:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:01.132 13:47:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.132 13:47:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.132 13:47:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.132 13:47:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:01.132 13:47:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.132 13:47:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.132 13:47:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.132 13:47:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:01.132 13:47:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.132 13:47:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.132 13:47:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.132 13:47:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:01.132 13:47:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:01.132 13:47:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:01.132 00:05:01.132 real 0m0.314s 00:05:01.132 user 0m0.210s 00:05:01.132 sys 0m0.042s 00:05:01.132 13:47:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.132 13:47:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.132 ************************************ 00:05:01.132 END TEST rpc_daemon_integrity 00:05:01.132 ************************************ 00:05:01.132 13:47:50 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:01.132 13:47:50 rpc -- rpc/rpc.sh@84 -- # killprocess 58751 00:05:01.132 13:47:50 rpc -- common/autotest_common.sh@950 -- # '[' -z 58751 ']' 00:05:01.132 13:47:50 rpc -- common/autotest_common.sh@954 -- # kill -0 58751 00:05:01.132 13:47:50 rpc -- common/autotest_common.sh@955 -- # uname 00:05:01.132 13:47:50 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:01.390 13:47:50 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58751 00:05:01.390 13:47:50 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:01.390 13:47:50 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:01.390 killing process with pid 58751 00:05:01.390 13:47:50 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58751' 00:05:01.390 13:47:50 rpc -- common/autotest_common.sh@969 -- # kill 58751 00:05:01.390 13:47:50 
rpc -- common/autotest_common.sh@974 -- # wait 58751 00:05:01.647 00:05:01.647 real 0m2.824s 00:05:01.647 user 0m3.636s 00:05:01.647 sys 0m0.695s 00:05:01.647 13:47:50 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.647 13:47:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.647 ************************************ 00:05:01.647 END TEST rpc 00:05:01.647 ************************************ 00:05:01.647 13:47:50 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:01.647 13:47:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.647 13:47:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.647 13:47:50 -- common/autotest_common.sh@10 -- # set +x 00:05:01.647 ************************************ 00:05:01.647 START TEST skip_rpc 00:05:01.647 ************************************ 00:05:01.647 13:47:50 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:01.906 * Looking for test storage... 00:05:01.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:01.906 13:47:50 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:01.906 13:47:50 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:01.906 13:47:50 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:01.906 13:47:50 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.906 13:47:50 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.906 13:47:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.906 ************************************ 00:05:01.906 START TEST skip_rpc 00:05:01.906 ************************************ 00:05:01.906 13:47:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:01.906 13:47:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58944 00:05:01.906 13:47:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:01.906 13:47:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.906 13:47:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:01.906 [2024-07-25 13:47:50.797042] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
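The skip_rpc test that starts here launches spdk_tgt with --no-rpc-server, so the expectation is that any JSON-RPC call fails, which the NOT rpc_cmd spdk_get_version check further down verifies. A sketch of the same check, assuming the default /var/tmp/spdk.sock socket and a fixed sleep in place of the harness's readiness polling:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 2                                                      # crude startup wait (assumption)
  # No RPC listener exists, so this call must exit non-zero
  if "$SPDK_DIR/scripts/rpc.py" -t 1 spdk_get_version; then
      echo "unexpected: RPC server answered" >&2
  fi
  kill "$tgt_pid"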
00:05:01.906 [2024-07-25 13:47:50.797153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58944 ] 00:05:01.906 [2024-07-25 13:47:50.933869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.165 [2024-07-25 13:47:51.037080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.165 [2024-07-25 13:47:51.097813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58944 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 58944 ']' 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 58944 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58944 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:07.431 killing process with pid 58944 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58944' 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 58944 00:05:07.431 13:47:55 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 58944 00:05:07.431 00:05:07.431 real 0m5.431s 00:05:07.431 user 0m5.046s 00:05:07.431 sys 0m0.290s 00:05:07.431 ************************************ 00:05:07.431 END TEST skip_rpc 00:05:07.431 ************************************ 00:05:07.431 13:47:56 skip_rpc.skip_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.431 13:47:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.431 13:47:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:07.431 13:47:56 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.431 13:47:56 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.431 13:47:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.431 ************************************ 00:05:07.431 START TEST skip_rpc_with_json 00:05:07.431 ************************************ 00:05:07.431 13:47:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:07.431 13:47:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:07.431 13:47:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59036 00:05:07.431 13:47:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.431 13:47:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.431 13:47:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59036 00:05:07.431 13:47:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 59036 ']' 00:05:07.431 13:47:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.431 13:47:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:07.431 13:47:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.431 13:47:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:07.431 13:47:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.431 [2024-07-25 13:47:56.274940] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
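waitforlisten above blocks until the freshly started target answers on /var/tmp/spdk.sock. Roughly the same effect can be had with a small polling loop; this is only an approximation of that helper, assuming rpc_get_methods as the readiness probe:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &
  tgt_pid=$!
  # Poll the default RPC socket instead of sleeping a fixed amount
  for _ in $(seq 1 100); do
      "$SPDK_DIR/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done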
00:05:07.431 [2024-07-25 13:47:56.275052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59036 ] 00:05:07.431 [2024-07-25 13:47:56.415494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.690 [2024-07-25 13:47:56.512539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.690 [2024-07-25 13:47:56.570999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:08.258 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.258 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:08.258 13:47:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:08.258 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.258 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.258 [2024-07-25 13:47:57.212026] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:08.258 request: 00:05:08.258 { 00:05:08.258 "trtype": "tcp", 00:05:08.258 "method": "nvmf_get_transports", 00:05:08.258 "req_id": 1 00:05:08.258 } 00:05:08.258 Got JSON-RPC error response 00:05:08.258 response: 00:05:08.258 { 00:05:08.258 "code": -19, 00:05:08.258 "message": "No such device" 00:05:08.258 } 00:05:08.258 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:08.258 13:47:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:08.258 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.258 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.258 [2024-07-25 13:47:57.224118] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:08.258 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.258 13:47:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:08.258 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.258 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.517 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.517 13:47:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:08.517 { 00:05:08.517 "subsystems": [ 00:05:08.517 { 00:05:08.517 "subsystem": "keyring", 00:05:08.517 "config": [] 00:05:08.517 }, 00:05:08.517 { 00:05:08.517 "subsystem": "iobuf", 00:05:08.517 "config": [ 00:05:08.517 { 00:05:08.517 "method": "iobuf_set_options", 00:05:08.517 "params": { 00:05:08.517 "small_pool_count": 8192, 00:05:08.517 "large_pool_count": 1024, 00:05:08.517 "small_bufsize": 8192, 00:05:08.517 "large_bufsize": 135168 00:05:08.517 } 00:05:08.517 } 00:05:08.517 ] 00:05:08.517 }, 00:05:08.517 { 00:05:08.517 "subsystem": "sock", 00:05:08.517 "config": [ 00:05:08.517 { 00:05:08.517 "method": "sock_set_default_impl", 00:05:08.517 "params": { 00:05:08.517 "impl_name": "uring" 00:05:08.517 } 00:05:08.517 }, 00:05:08.517 { 00:05:08.517 "method": "sock_impl_set_options", 
00:05:08.517 "params": { 00:05:08.517 "impl_name": "ssl", 00:05:08.517 "recv_buf_size": 4096, 00:05:08.517 "send_buf_size": 4096, 00:05:08.517 "enable_recv_pipe": true, 00:05:08.517 "enable_quickack": false, 00:05:08.517 "enable_placement_id": 0, 00:05:08.517 "enable_zerocopy_send_server": true, 00:05:08.517 "enable_zerocopy_send_client": false, 00:05:08.517 "zerocopy_threshold": 0, 00:05:08.517 "tls_version": 0, 00:05:08.517 "enable_ktls": false 00:05:08.517 } 00:05:08.517 }, 00:05:08.517 { 00:05:08.517 "method": "sock_impl_set_options", 00:05:08.517 "params": { 00:05:08.517 "impl_name": "posix", 00:05:08.517 "recv_buf_size": 2097152, 00:05:08.517 "send_buf_size": 2097152, 00:05:08.517 "enable_recv_pipe": true, 00:05:08.517 "enable_quickack": false, 00:05:08.517 "enable_placement_id": 0, 00:05:08.517 "enable_zerocopy_send_server": true, 00:05:08.517 "enable_zerocopy_send_client": false, 00:05:08.517 "zerocopy_threshold": 0, 00:05:08.517 "tls_version": 0, 00:05:08.517 "enable_ktls": false 00:05:08.517 } 00:05:08.517 }, 00:05:08.517 { 00:05:08.517 "method": "sock_impl_set_options", 00:05:08.517 "params": { 00:05:08.517 "impl_name": "uring", 00:05:08.517 "recv_buf_size": 2097152, 00:05:08.517 "send_buf_size": 2097152, 00:05:08.517 "enable_recv_pipe": true, 00:05:08.517 "enable_quickack": false, 00:05:08.517 "enable_placement_id": 0, 00:05:08.517 "enable_zerocopy_send_server": false, 00:05:08.517 "enable_zerocopy_send_client": false, 00:05:08.517 "zerocopy_threshold": 0, 00:05:08.517 "tls_version": 0, 00:05:08.518 "enable_ktls": false 00:05:08.518 } 00:05:08.518 } 00:05:08.518 ] 00:05:08.518 }, 00:05:08.518 { 00:05:08.518 "subsystem": "vmd", 00:05:08.518 "config": [] 00:05:08.518 }, 00:05:08.518 { 00:05:08.518 "subsystem": "accel", 00:05:08.518 "config": [ 00:05:08.518 { 00:05:08.518 "method": "accel_set_options", 00:05:08.518 "params": { 00:05:08.518 "small_cache_size": 128, 00:05:08.518 "large_cache_size": 16, 00:05:08.518 "task_count": 2048, 00:05:08.518 "sequence_count": 2048, 00:05:08.518 "buf_count": 2048 00:05:08.518 } 00:05:08.518 } 00:05:08.518 ] 00:05:08.518 }, 00:05:08.518 { 00:05:08.518 "subsystem": "bdev", 00:05:08.518 "config": [ 00:05:08.518 { 00:05:08.518 "method": "bdev_set_options", 00:05:08.518 "params": { 00:05:08.518 "bdev_io_pool_size": 65535, 00:05:08.518 "bdev_io_cache_size": 256, 00:05:08.518 "bdev_auto_examine": true, 00:05:08.518 "iobuf_small_cache_size": 128, 00:05:08.518 "iobuf_large_cache_size": 16 00:05:08.518 } 00:05:08.518 }, 00:05:08.518 { 00:05:08.518 "method": "bdev_raid_set_options", 00:05:08.518 "params": { 00:05:08.518 "process_window_size_kb": 1024, 00:05:08.518 "process_max_bandwidth_mb_sec": 0 00:05:08.518 } 00:05:08.518 }, 00:05:08.518 { 00:05:08.518 "method": "bdev_iscsi_set_options", 00:05:08.518 "params": { 00:05:08.518 "timeout_sec": 30 00:05:08.518 } 00:05:08.518 }, 00:05:08.518 { 00:05:08.518 "method": "bdev_nvme_set_options", 00:05:08.518 "params": { 00:05:08.518 "action_on_timeout": "none", 00:05:08.518 "timeout_us": 0, 00:05:08.518 "timeout_admin_us": 0, 00:05:08.518 "keep_alive_timeout_ms": 10000, 00:05:08.518 "arbitration_burst": 0, 00:05:08.518 "low_priority_weight": 0, 00:05:08.518 "medium_priority_weight": 0, 00:05:08.518 "high_priority_weight": 0, 00:05:08.518 "nvme_adminq_poll_period_us": 10000, 00:05:08.518 "nvme_ioq_poll_period_us": 0, 00:05:08.518 "io_queue_requests": 0, 00:05:08.518 "delay_cmd_submit": true, 00:05:08.518 "transport_retry_count": 4, 00:05:08.518 "bdev_retry_count": 3, 00:05:08.518 "transport_ack_timeout": 0, 
00:05:08.518 "ctrlr_loss_timeout_sec": 0, 00:05:08.518 "reconnect_delay_sec": 0, 00:05:08.518 "fast_io_fail_timeout_sec": 0, 00:05:08.518 "disable_auto_failback": false, 00:05:08.518 "generate_uuids": false, 00:05:08.518 "transport_tos": 0, 00:05:08.518 "nvme_error_stat": false, 00:05:08.518 "rdma_srq_size": 0, 00:05:08.518 "io_path_stat": false, 00:05:08.518 "allow_accel_sequence": false, 00:05:08.518 "rdma_max_cq_size": 0, 00:05:08.518 "rdma_cm_event_timeout_ms": 0, 00:05:08.518 "dhchap_digests": [ 00:05:08.518 "sha256", 00:05:08.518 "sha384", 00:05:08.518 "sha512" 00:05:08.518 ], 00:05:08.518 "dhchap_dhgroups": [ 00:05:08.518 "null", 00:05:08.518 "ffdhe2048", 00:05:08.518 "ffdhe3072", 00:05:08.518 "ffdhe4096", 00:05:08.518 "ffdhe6144", 00:05:08.518 "ffdhe8192" 00:05:08.518 ] 00:05:08.518 } 00:05:08.518 }, 00:05:08.518 { 00:05:08.518 "method": "bdev_nvme_set_hotplug", 00:05:08.518 "params": { 00:05:08.518 "period_us": 100000, 00:05:08.518 "enable": false 00:05:08.518 } 00:05:08.518 }, 00:05:08.518 { 00:05:08.518 "method": "bdev_wait_for_examine" 00:05:08.518 } 00:05:08.518 ] 00:05:08.518 }, 00:05:08.518 { 00:05:08.518 "subsystem": "scsi", 00:05:08.518 "config": null 00:05:08.518 }, 00:05:08.518 { 00:05:08.518 "subsystem": "scheduler", 00:05:08.518 "config": [ 00:05:08.518 { 00:05:08.518 "method": "framework_set_scheduler", 00:05:08.518 "params": { 00:05:08.518 "name": "static" 00:05:08.518 } 00:05:08.518 } 00:05:08.518 ] 00:05:08.518 }, 00:05:08.518 { 00:05:08.518 "subsystem": "vhost_scsi", 00:05:08.518 "config": [] 00:05:08.518 }, 00:05:08.518 { 00:05:08.518 "subsystem": "vhost_blk", 00:05:08.518 "config": [] 00:05:08.518 }, 00:05:08.518 { 00:05:08.518 "subsystem": "ublk", 00:05:08.518 "config": [] 00:05:08.518 }, 00:05:08.518 { 00:05:08.518 "subsystem": "nbd", 00:05:08.518 "config": [] 00:05:08.518 }, 00:05:08.518 { 00:05:08.518 "subsystem": "nvmf", 00:05:08.518 "config": [ 00:05:08.518 { 00:05:08.518 "method": "nvmf_set_config", 00:05:08.518 "params": { 00:05:08.518 "discovery_filter": "match_any", 00:05:08.518 "admin_cmd_passthru": { 00:05:08.518 "identify_ctrlr": false 00:05:08.518 } 00:05:08.518 } 00:05:08.518 }, 00:05:08.518 { 00:05:08.518 "method": "nvmf_set_max_subsystems", 00:05:08.518 "params": { 00:05:08.518 "max_subsystems": 1024 00:05:08.518 } 00:05:08.518 }, 00:05:08.518 { 00:05:08.518 "method": "nvmf_set_crdt", 00:05:08.518 "params": { 00:05:08.518 "crdt1": 0, 00:05:08.518 "crdt2": 0, 00:05:08.518 "crdt3": 0 00:05:08.518 } 00:05:08.518 }, 00:05:08.518 { 00:05:08.518 "method": "nvmf_create_transport", 00:05:08.518 "params": { 00:05:08.518 "trtype": "TCP", 00:05:08.518 "max_queue_depth": 128, 00:05:08.518 "max_io_qpairs_per_ctrlr": 127, 00:05:08.518 "in_capsule_data_size": 4096, 00:05:08.518 "max_io_size": 131072, 00:05:08.518 "io_unit_size": 131072, 00:05:08.518 "max_aq_depth": 128, 00:05:08.518 "num_shared_buffers": 511, 00:05:08.518 "buf_cache_size": 4294967295, 00:05:08.518 "dif_insert_or_strip": false, 00:05:08.518 "zcopy": false, 00:05:08.518 "c2h_success": true, 00:05:08.518 "sock_priority": 0, 00:05:08.518 "abort_timeout_sec": 1, 00:05:08.518 "ack_timeout": 0, 00:05:08.518 "data_wr_pool_size": 0 00:05:08.518 } 00:05:08.518 } 00:05:08.518 ] 00:05:08.518 }, 00:05:08.518 { 00:05:08.518 "subsystem": "iscsi", 00:05:08.518 "config": [ 00:05:08.518 { 00:05:08.518 "method": "iscsi_set_options", 00:05:08.518 "params": { 00:05:08.518 "node_base": "iqn.2016-06.io.spdk", 00:05:08.518 "max_sessions": 128, 00:05:08.518 "max_connections_per_session": 2, 00:05:08.518 
"max_queue_depth": 64, 00:05:08.518 "default_time2wait": 2, 00:05:08.518 "default_time2retain": 20, 00:05:08.518 "first_burst_length": 8192, 00:05:08.518 "immediate_data": true, 00:05:08.518 "allow_duplicated_isid": false, 00:05:08.518 "error_recovery_level": 0, 00:05:08.518 "nop_timeout": 60, 00:05:08.518 "nop_in_interval": 30, 00:05:08.518 "disable_chap": false, 00:05:08.518 "require_chap": false, 00:05:08.518 "mutual_chap": false, 00:05:08.518 "chap_group": 0, 00:05:08.518 "max_large_datain_per_connection": 64, 00:05:08.518 "max_r2t_per_connection": 4, 00:05:08.518 "pdu_pool_size": 36864, 00:05:08.518 "immediate_data_pool_size": 16384, 00:05:08.518 "data_out_pool_size": 2048 00:05:08.518 } 00:05:08.518 } 00:05:08.518 ] 00:05:08.518 } 00:05:08.518 ] 00:05:08.518 } 00:05:08.518 13:47:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:08.518 13:47:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59036 00:05:08.518 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59036 ']' 00:05:08.518 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59036 00:05:08.518 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:08.518 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:08.518 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59036 00:05:08.518 killing process with pid 59036 00:05:08.518 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:08.518 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:08.518 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59036' 00:05:08.518 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 59036 00:05:08.518 13:47:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59036 00:05:09.087 13:47:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59058 00:05:09.087 13:47:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:09.087 13:47:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:14.360 13:48:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59058 00:05:14.360 13:48:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59058 ']' 00:05:14.360 13:48:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59058 00:05:14.360 13:48:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:14.360 13:48:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:14.360 13:48:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59058 00:05:14.360 13:48:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:14.360 13:48:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:14.360 killing process with pid 59058 00:05:14.360 13:48:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59058' 00:05:14.360 13:48:02 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@969 -- # kill 59058 00:05:14.360 13:48:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59058 00:05:14.360 13:48:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:14.360 13:48:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:14.360 00:05:14.360 real 0m7.057s 00:05:14.360 user 0m6.727s 00:05:14.360 sys 0m0.682s 00:05:14.360 13:48:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.360 13:48:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:14.360 ************************************ 00:05:14.360 END TEST skip_rpc_with_json 00:05:14.360 ************************************ 00:05:14.360 13:48:03 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:14.360 13:48:03 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.360 13:48:03 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.360 13:48:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.360 ************************************ 00:05:14.360 START TEST skip_rpc_with_delay 00:05:14.360 ************************************ 00:05:14.360 13:48:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:14.360 13:48:03 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.360 13:48:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:14.360 13:48:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.360 13:48:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.360 13:48:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.360 13:48:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.360 13:48:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.360 13:48:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.360 13:48:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.360 13:48:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.360 13:48:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:14.360 13:48:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.619 [2024-07-25 13:48:03.392225] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
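That error is the expected outcome: test_skip_rpc_with_delay asserts that spdk_tgt refuses --wait-for-rpc when --no-rpc-server is also given, so the wrapped command must exit non-zero for the test to pass. A minimal sketch of that negative-test pattern (not the test's NOT()/valid_exec_arg helpers themselves) looks like:

    # run the command that is supposed to fail; succeed only if it really does fail
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "ERROR: spdk_tgt started even though --wait-for-rpc should be rejected" >&2
        exit 1
    fi
    echo "spdk_tgt rejected --wait-for-rpc without an RPC server, as expected"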
00:05:14.619 [2024-07-25 13:48:03.392386] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:14.619 ************************************ 00:05:14.619 END TEST skip_rpc_with_delay 00:05:14.619 ************************************ 00:05:14.619 13:48:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:14.619 13:48:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:14.619 13:48:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:14.619 13:48:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:14.619 00:05:14.619 real 0m0.091s 00:05:14.619 user 0m0.060s 00:05:14.619 sys 0m0.031s 00:05:14.619 13:48:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.619 13:48:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:14.619 13:48:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:14.619 13:48:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:14.619 13:48:03 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:14.619 13:48:03 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.619 13:48:03 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.619 13:48:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.619 ************************************ 00:05:14.619 START TEST exit_on_failed_rpc_init 00:05:14.619 ************************************ 00:05:14.619 13:48:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:14.619 13:48:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59173 00:05:14.619 13:48:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59173 00:05:14.619 13:48:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 59173 ']' 00:05:14.619 13:48:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.619 13:48:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.619 13:48:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.619 13:48:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.619 13:48:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.619 13:48:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:14.619 [2024-07-25 13:48:03.532244] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:05:14.619 [2024-07-25 13:48:03.532396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59173 ] 00:05:14.877 [2024-07-25 13:48:03.671480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.877 [2024-07-25 13:48:03.776268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.877 [2024-07-25 13:48:03.832572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:15.813 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.813 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:15.813 13:48:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.813 13:48:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.813 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:15.813 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.813 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.813 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.813 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.813 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.813 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.813 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.813 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.813 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:15.813 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.813 [2024-07-25 13:48:04.586039] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:05:15.813 [2024-07-25 13:48:04.586613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59191 ] 00:05:15.813 [2024-07-25 13:48:04.725126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.071 [2024-07-25 13:48:04.846734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.071 [2024-07-25 13:48:04.846861] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
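The "socket path ... in use" error just above is the point of exit_on_failed_rpc_init: a second spdk_tgt (-m 0x2, pid 59191) was started while pid 59173 still owns the default RPC socket, so rpc_listen fails and the app exits non-zero. A rough manual reproduction of the collision (a sketch, not the test's wrapper functions) would be:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &     # first instance owns /var/tmp/spdk.sock
    sleep 2                                                      # crude wait; the real test polls the RPC socket instead
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2       # fails: RPC Unix domain socket path already in use
    echo "second instance exited with status $?"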
00:05:16.071 [2024-07-25 13:48:04.846879] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:16.072 [2024-07-25 13:48:04.846890] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:16.072 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:16.072 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:16.072 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:16.072 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:16.072 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:16.072 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:16.072 13:48:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:16.072 13:48:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59173 00:05:16.072 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 59173 ']' 00:05:16.072 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 59173 00:05:16.072 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:16.072 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.072 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59173 00:05:16.072 killing process with pid 59173 00:05:16.072 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:16.072 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:16.072 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59173' 00:05:16.072 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 59173 00:05:16.072 13:48:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 59173 00:05:16.330 00:05:16.330 real 0m1.872s 00:05:16.330 user 0m2.213s 00:05:16.330 sys 0m0.432s 00:05:16.330 13:48:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.330 ************************************ 00:05:16.330 END TEST exit_on_failed_rpc_init 00:05:16.330 ************************************ 00:05:16.330 13:48:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:16.589 13:48:05 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:16.589 00:05:16.589 real 0m14.740s 00:05:16.589 user 0m14.156s 00:05:16.589 sys 0m1.600s 00:05:16.589 13:48:05 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.589 ************************************ 00:05:16.589 END TEST skip_rpc 00:05:16.589 ************************************ 00:05:16.589 13:48:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.589 13:48:05 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:16.589 13:48:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.589 13:48:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.589 13:48:05 -- common/autotest_common.sh@10 -- # set +x 00:05:16.589 
************************************ 00:05:16.589 START TEST rpc_client 00:05:16.589 ************************************ 00:05:16.589 13:48:05 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:16.589 * Looking for test storage... 00:05:16.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:16.589 13:48:05 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:16.589 OK 00:05:16.589 13:48:05 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:16.589 00:05:16.589 real 0m0.095s 00:05:16.589 user 0m0.047s 00:05:16.589 sys 0m0.053s 00:05:16.589 ************************************ 00:05:16.589 END TEST rpc_client 00:05:16.589 ************************************ 00:05:16.589 13:48:05 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.589 13:48:05 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:16.589 13:48:05 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:16.589 13:48:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.589 13:48:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.589 13:48:05 -- common/autotest_common.sh@10 -- # set +x 00:05:16.589 ************************************ 00:05:16.589 START TEST json_config 00:05:16.589 ************************************ 00:05:16.589 13:48:05 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:16.849 13:48:05 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:16.849 13:48:05 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:16.849 13:48:05 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:16.849 13:48:05 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:16.849 13:48:05 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.849 13:48:05 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.849 13:48:05 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.849 13:48:05 json_config -- paths/export.sh@5 -- # export PATH 00:05:16.849 13:48:05 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@47 -- # : 0 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:16.849 13:48:05 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:16.849 13:48:05 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:16.849 13:48:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:16.850 13:48:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:16.850 13:48:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:16.850 13:48:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:16.850 13:48:05 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:16.850 13:48:05 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:05:16.850 13:48:05 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:16.850 13:48:05 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:16.850 13:48:05 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:16.850 13:48:05 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:16.850 13:48:05 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:16.850 13:48:05 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:16.850 13:48:05 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:16.850 13:48:05 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:16.850 13:48:05 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:16.850 INFO: JSON configuration test init 00:05:16.850 13:48:05 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:16.850 13:48:05 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:16.850 13:48:05 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:16.850 13:48:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.850 13:48:05 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:16.850 13:48:05 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:16.850 13:48:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.850 Waiting for target to run... 00:05:16.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:16.850 13:48:05 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:16.850 13:48:05 json_config -- json_config/common.sh@9 -- # local app=target 00:05:16.850 13:48:05 json_config -- json_config/common.sh@10 -- # shift 00:05:16.850 13:48:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:16.850 13:48:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:16.850 13:48:05 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:16.850 13:48:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.850 13:48:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.850 13:48:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59309 00:05:16.850 13:48:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
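Next the target is launched with --wait-for-rpc and waitforlisten blocks until it answers on /var/tmp/spdk_tgt.sock. A minimal sketch of that polling idea (the real helper in autotest_common.sh is more elaborate) is:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    pid=$!
    # poll the RPC socket until the target responds, bailing out if the process dies first
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$pid" 2>/dev/null || { echo "spdk_tgt died before listening" >&2; exit 1; }
        sleep 0.5
    done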
00:05:16.850 13:48:05 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:16.850 13:48:05 json_config -- json_config/common.sh@25 -- # waitforlisten 59309 /var/tmp/spdk_tgt.sock 00:05:16.850 13:48:05 json_config -- common/autotest_common.sh@831 -- # '[' -z 59309 ']' 00:05:16.850 13:48:05 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:16.850 13:48:05 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:16.850 13:48:05 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:16.850 13:48:05 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:16.850 13:48:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.850 [2024-07-25 13:48:05.734782] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:05:16.850 [2024-07-25 13:48:05.735185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59309 ] 00:05:17.418 [2024-07-25 13:48:06.163080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.418 [2024-07-25 13:48:06.251080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.677 13:48:06 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:17.677 13:48:06 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:17.677 13:48:06 json_config -- json_config/common.sh@26 -- # echo '' 00:05:17.677 00:05:17.677 13:48:06 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:17.677 13:48:06 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:17.677 13:48:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:17.677 13:48:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.677 13:48:06 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:17.677 13:48:06 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:17.677 13:48:06 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:17.677 13:48:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.677 13:48:06 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:17.677 13:48:06 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:17.677 13:48:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:17.936 [2024-07-25 13:48:06.934997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:18.195 13:48:07 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:18.195 13:48:07 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:18.195 13:48:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:18.195 13:48:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.195 13:48:07 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:18.195 13:48:07 json_config -- 
json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:18.195 13:48:07 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:18.195 13:48:07 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:18.195 13:48:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:18.195 13:48:07 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@51 -- # sort 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:18.455 13:48:07 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:18.455 13:48:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:18.455 13:48:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:18.455 13:48:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:18.455 13:48:07 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:18.455 13:48:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:18.714 MallocForNvmf0 00:05:18.714 13:48:07 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:18.714 13:48:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:18.973 MallocForNvmf1 00:05:18.973 13:48:07 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:18.973 
13:48:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:19.232 [2024-07-25 13:48:08.113188] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:19.232 13:48:08 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:19.232 13:48:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:19.490 13:48:08 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:19.491 13:48:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:19.749 13:48:08 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:19.749 13:48:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:19.749 13:48:08 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:19.749 13:48:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:20.008 [2024-07-25 13:48:08.933644] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:20.008 13:48:08 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:20.008 13:48:08 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:20.008 13:48:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.008 13:48:08 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:20.008 13:48:08 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:20.008 13:48:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.008 13:48:09 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:20.008 13:48:09 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:20.008 13:48:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:20.574 MallocBdevForConfigChangeCheck 00:05:20.574 13:48:09 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:20.574 13:48:09 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:20.574 13:48:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.574 13:48:09 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:20.574 13:48:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.832 INFO: shutting down applications... 
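The tgt_rpc calls traced just above (bdev_malloc_create through nvmf_subsystem_add_listener) are what create_nvmf_subsystem_config builds and what save_config then serializes. Issued by hand against the same socket, the sequence is roughly:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
    rpc bdev_malloc_create 8 512 --name MallocForNvmf0        # 8 MB malloc bdev, 512-byte blocks
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    rpc nvmf_create_transport -t tcp -u 8192 -c 0             # options copied verbatim from the trace
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420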
00:05:20.832 13:48:09 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:20.832 13:48:09 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:20.832 13:48:09 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:20.832 13:48:09 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:20.832 13:48:09 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:21.089 Calling clear_iscsi_subsystem 00:05:21.089 Calling clear_nvmf_subsystem 00:05:21.089 Calling clear_nbd_subsystem 00:05:21.089 Calling clear_ublk_subsystem 00:05:21.089 Calling clear_vhost_blk_subsystem 00:05:21.089 Calling clear_vhost_scsi_subsystem 00:05:21.089 Calling clear_bdev_subsystem 00:05:21.089 13:48:09 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:21.089 13:48:09 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:21.089 13:48:09 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:21.089 13:48:09 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.089 13:48:09 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:21.089 13:48:09 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:21.347 13:48:10 json_config -- json_config/json_config.sh@349 -- # break 00:05:21.347 13:48:10 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:21.347 13:48:10 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:21.347 13:48:10 json_config -- json_config/common.sh@31 -- # local app=target 00:05:21.347 13:48:10 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:21.347 13:48:10 json_config -- json_config/common.sh@35 -- # [[ -n 59309 ]] 00:05:21.347 13:48:10 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59309 00:05:21.347 13:48:10 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:21.347 13:48:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.347 13:48:10 json_config -- json_config/common.sh@41 -- # kill -0 59309 00:05:21.347 13:48:10 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.913 13:48:10 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.913 13:48:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.913 13:48:10 json_config -- json_config/common.sh@41 -- # kill -0 59309 00:05:21.913 13:48:10 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:21.913 13:48:10 json_config -- json_config/common.sh@43 -- # break 00:05:21.913 13:48:10 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:21.913 13:48:10 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:21.913 SPDK target shutdown done 00:05:21.913 13:48:10 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:21.913 INFO: relaunching applications... 
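json_config_test_shutdown_app, visible above as the common.sh@31-45 trace, sends SIGINT and then polls the pid for up to 30 half-second intervals before declaring "SPDK target shutdown done". Its core reduces to roughly the following (using a plain $app_pid variable in place of the test's array):

    kill -SIGINT "$app_pid"
    for _ in $(seq 1 30); do
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5        # same cadence as json_config/common.sh@45
    done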
00:05:21.913 13:48:10 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:21.913 Waiting for target to run... 00:05:21.913 13:48:10 json_config -- json_config/common.sh@9 -- # local app=target 00:05:21.913 13:48:10 json_config -- json_config/common.sh@10 -- # shift 00:05:21.913 13:48:10 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:21.913 13:48:10 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:21.913 13:48:10 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:21.913 13:48:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.913 13:48:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.913 13:48:10 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59500 00:05:21.913 13:48:10 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:21.913 13:48:10 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:21.913 13:48:10 json_config -- json_config/common.sh@25 -- # waitforlisten 59500 /var/tmp/spdk_tgt.sock 00:05:21.913 13:48:10 json_config -- common/autotest_common.sh@831 -- # '[' -z 59500 ']' 00:05:21.913 13:48:10 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:21.913 13:48:10 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.913 13:48:10 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:21.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:21.913 13:48:10 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.913 13:48:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.913 [2024-07-25 13:48:10.898042] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:05:21.913 [2024-07-25 13:48:10.898144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59500 ] 00:05:22.481 [2024-07-25 13:48:11.303198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.481 [2024-07-25 13:48:11.377179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.481 [2024-07-25 13:48:11.502927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:22.739 [2024-07-25 13:48:11.704393] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:22.739 [2024-07-25 13:48:11.736459] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:22.997 00:05:22.997 INFO: Checking if target configuration is the same... 
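"Checking if target configuration is the same..." means: the target relaunched just above from the saved spdk_tgt_config.json is asked for its live configuration again, both JSON documents are normalized with config_filter.py -method sort, and the results are diffed, as the json_diff.sh trace below shows. Condensed, and assuming the paths from the log:

    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    live=$(mktemp /tmp/62.XXX)
    saved=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > "$live"
    "$filter" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > "$saved"
    diff -u "$live" "$saved" && echo 'INFO: JSON config files are the same'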
00:05:22.997 13:48:11 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.997 13:48:11 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:22.997 13:48:11 json_config -- json_config/common.sh@26 -- # echo '' 00:05:22.997 13:48:11 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:22.997 13:48:11 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:22.997 13:48:11 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:22.997 13:48:11 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:22.997 13:48:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.997 + '[' 2 -ne 2 ']' 00:05:22.997 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:22.997 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:22.997 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:22.997 +++ basename /dev/fd/62 00:05:22.997 ++ mktemp /tmp/62.XXX 00:05:22.997 + tmp_file_1=/tmp/62.7LV 00:05:22.997 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:22.997 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:22.997 + tmp_file_2=/tmp/spdk_tgt_config.json.LyT 00:05:22.997 + ret=0 00:05:22.997 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:23.255 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:23.255 + diff -u /tmp/62.7LV /tmp/spdk_tgt_config.json.LyT 00:05:23.255 INFO: JSON config files are the same 00:05:23.255 + echo 'INFO: JSON config files are the same' 00:05:23.255 + rm /tmp/62.7LV /tmp/spdk_tgt_config.json.LyT 00:05:23.255 + exit 0 00:05:23.255 INFO: changing configuration and checking if this can be detected... 00:05:23.255 13:48:12 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:23.255 13:48:12 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:23.255 13:48:12 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:23.255 13:48:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:23.514 13:48:12 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:23.514 13:48:12 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:23.514 13:48:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:23.514 + '[' 2 -ne 2 ']' 00:05:23.514 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:23.514 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
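The second json_diff.sh run, whose trace begins here, is the negative half of the check: MallocBdevForConfigChangeCheck was deleted via bdev_malloc_delete above, so this time the sorted diff is expected to be non-empty and ret=1 is the passing outcome. In sketch form:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    # re-running the sorted comparison from the previous sketch must now report differences;
    # the test treats that non-zero diff status (ret=1) as "configuration change detected"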
00:05:23.514 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:23.514 +++ basename /dev/fd/62 00:05:23.514 ++ mktemp /tmp/62.XXX 00:05:23.514 + tmp_file_1=/tmp/62.q0b 00:05:23.514 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:23.514 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:23.514 + tmp_file_2=/tmp/spdk_tgt_config.json.7nn 00:05:23.514 + ret=0 00:05:23.514 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:24.080 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:24.080 + diff -u /tmp/62.q0b /tmp/spdk_tgt_config.json.7nn 00:05:24.080 + ret=1 00:05:24.080 + echo '=== Start of file: /tmp/62.q0b ===' 00:05:24.080 + cat /tmp/62.q0b 00:05:24.080 + echo '=== End of file: /tmp/62.q0b ===' 00:05:24.080 + echo '' 00:05:24.080 + echo '=== Start of file: /tmp/spdk_tgt_config.json.7nn ===' 00:05:24.080 + cat /tmp/spdk_tgt_config.json.7nn 00:05:24.080 + echo '=== End of file: /tmp/spdk_tgt_config.json.7nn ===' 00:05:24.080 + echo '' 00:05:24.080 + rm /tmp/62.q0b /tmp/spdk_tgt_config.json.7nn 00:05:24.080 + exit 1 00:05:24.080 INFO: configuration change detected. 00:05:24.080 13:48:12 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:24.080 13:48:12 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:24.080 13:48:12 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:24.080 13:48:12 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:24.080 13:48:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.080 13:48:12 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:24.080 13:48:12 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:24.080 13:48:12 json_config -- json_config/json_config.sh@321 -- # [[ -n 59500 ]] 00:05:24.080 13:48:12 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:24.080 13:48:12 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:24.080 13:48:12 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:24.080 13:48:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.080 13:48:12 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:24.080 13:48:12 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:24.080 13:48:12 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:24.080 13:48:12 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:24.080 13:48:12 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:24.080 13:48:12 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:24.080 13:48:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:24.080 13:48:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.080 13:48:12 json_config -- json_config/json_config.sh@327 -- # killprocess 59500 00:05:24.080 13:48:12 json_config -- common/autotest_common.sh@950 -- # '[' -z 59500 ']' 00:05:24.080 13:48:12 json_config -- common/autotest_common.sh@954 -- # kill -0 59500 00:05:24.080 13:48:12 json_config -- common/autotest_common.sh@955 -- # uname 00:05:24.080 13:48:12 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:24.080 13:48:12 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59500 00:05:24.080 
killing process with pid 59500 00:05:24.080 13:48:13 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:24.080 13:48:13 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:24.080 13:48:13 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59500' 00:05:24.080 13:48:13 json_config -- common/autotest_common.sh@969 -- # kill 59500 00:05:24.080 13:48:13 json_config -- common/autotest_common.sh@974 -- # wait 59500 00:05:24.338 13:48:13 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:24.338 13:48:13 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:24.338 13:48:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:24.338 13:48:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.338 13:48:13 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:24.338 INFO: Success 00:05:24.338 13:48:13 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:24.338 00:05:24.338 real 0m7.734s 00:05:24.338 user 0m10.789s 00:05:24.338 sys 0m1.643s 00:05:24.338 13:48:13 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.338 13:48:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.338 ************************************ 00:05:24.338 END TEST json_config 00:05:24.338 ************************************ 00:05:24.338 13:48:13 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:24.338 13:48:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.338 13:48:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.338 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:05:24.338 ************************************ 00:05:24.338 START TEST json_config_extra_key 00:05:24.338 ************************************ 00:05:24.338 13:48:13 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:24.595 13:48:13 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:05:24.595 13:48:13 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:24.595 13:48:13 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.595 13:48:13 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.595 13:48:13 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.595 13:48:13 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.595 13:48:13 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.595 13:48:13 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.595 13:48:13 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:24.595 13:48:13 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.595 13:48:13 json_config_extra_key -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:24.595 13:48:13 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:24.595 13:48:13 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:24.595 13:48:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:24.595 13:48:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:24.595 13:48:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:24.595 13:48:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:24.595 13:48:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:24.595 13:48:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:24.596 13:48:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:24.596 13:48:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:24.596 INFO: launching applications... 00:05:24.596 13:48:13 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:24.596 13:48:13 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:24.596 13:48:13 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:24.596 13:48:13 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:24.596 13:48:13 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:24.596 13:48:13 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:24.596 13:48:13 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:24.596 13:48:13 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:24.596 13:48:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.596 13:48:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.596 Waiting for target to run... 00:05:24.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:24.596 13:48:13 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59635 00:05:24.596 13:48:13 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
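The nvmf/common.sh values sourced above (NVME_HOSTNQN from `nvme gen-hostnqn`, NVME_HOSTID, NVME_SUBNQN, NVMF_PORT=4420) are not exercised by json_config_extra_key itself; they exist so initiator-side tests can attach to a target. As a rough illustration of how such values are typically consumed (an assumption about later usage, not something traced here):

    NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # bare UUID part, as in the hostid seen in the log
    nvme connect -t tcp -a 127.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"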
00:05:24.596 13:48:13 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59635 /var/tmp/spdk_tgt.sock 00:05:24.596 13:48:13 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:24.596 13:48:13 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 59635 ']' 00:05:24.596 13:48:13 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:24.596 13:48:13 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.596 13:48:13 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:24.596 13:48:13 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.596 13:48:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:24.596 [2024-07-25 13:48:13.484668] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:05:24.596 [2024-07-25 13:48:13.484758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59635 ] 00:05:25.202 [2024-07-25 13:48:13.892352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.202 [2024-07-25 13:48:13.967935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.202 [2024-07-25 13:48:13.988005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:25.459 00:05:25.459 INFO: shutting down applications... 00:05:25.459 13:48:14 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.459 13:48:14 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:25.459 13:48:14 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:25.459 13:48:14 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
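The json_config_extra_key startup traced above reduces to launching spdk_tgt against the extra_key JSON config and polling its RPC socket before the test body runs. A minimal sketch of that pattern, using a plain socket-existence check in place of the waitforlisten helper (which, per the trace, retries an actual RPC call with max_retries=100); only the loop below is illustrative, the command line is the one in the trace:

    # start the target with the extra_key JSON config (command line as traced above)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!
    # poll for the RPC socket before driving the test (simplified check; the real
    # waitforlisten helper in autotest_common.sh retries an RPC call instead)
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk_tgt.sock ] && break
        sleep 0.1
    done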
00:05:25.459 13:48:14 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:25.459 13:48:14 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:25.459 13:48:14 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:25.459 13:48:14 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59635 ]] 00:05:25.459 13:48:14 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59635 00:05:25.459 13:48:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:25.459 13:48:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.459 13:48:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59635 00:05:25.459 13:48:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:26.024 13:48:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:26.024 13:48:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.024 13:48:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59635 00:05:26.024 13:48:14 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:26.024 13:48:14 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:26.024 13:48:14 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:26.024 13:48:14 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:26.024 SPDK target shutdown done 00:05:26.024 13:48:14 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:26.024 Success 00:05:26.024 00:05:26.024 real 0m1.627s 00:05:26.024 user 0m1.543s 00:05:26.024 sys 0m0.423s 00:05:26.024 13:48:14 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.024 13:48:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:26.024 ************************************ 00:05:26.024 END TEST json_config_extra_key 00:05:26.024 ************************************ 00:05:26.024 13:48:15 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:26.024 13:48:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.025 13:48:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.025 13:48:15 -- common/autotest_common.sh@10 -- # set +x 00:05:26.025 ************************************ 00:05:26.025 START TEST alias_rpc 00:05:26.025 ************************************ 00:05:26.025 13:48:15 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:26.283 * Looking for test storage... 00:05:26.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:26.283 13:48:15 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:26.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
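The shutdown loop just traced sends SIGINT to the target and then polls it with kill -0, for at most 30 iterations with a 0.5 s sleep, before reporting 'SPDK target shutdown done'. Condensed to its core (loop bound and sleep interval taken from the trace, variable names illustrative):

    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break   # process gone -> shutdown complete
        sleep 0.5
    done
    echo 'SPDK target shutdown done'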
00:05:26.283 13:48:15 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59705 00:05:26.283 13:48:15 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.283 13:48:15 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59705 00:05:26.283 13:48:15 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 59705 ']' 00:05:26.283 13:48:15 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.283 13:48:15 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.283 13:48:15 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.283 13:48:15 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.283 13:48:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.283 [2024-07-25 13:48:15.204354] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:05:26.283 [2024-07-25 13:48:15.205383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59705 ] 00:05:26.542 [2024-07-25 13:48:15.343122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.542 [2024-07-25 13:48:15.458912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.542 [2024-07-25 13:48:15.513814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:27.475 13:48:16 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.475 13:48:16 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:27.475 13:48:16 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:27.475 13:48:16 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59705 00:05:27.475 13:48:16 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 59705 ']' 00:05:27.475 13:48:16 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 59705 00:05:27.475 13:48:16 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:27.475 13:48:16 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.475 13:48:16 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59705 00:05:27.475 killing process with pid 59705 00:05:27.475 13:48:16 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.475 13:48:16 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.475 13:48:16 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59705' 00:05:27.475 13:48:16 alias_rpc -- common/autotest_common.sh@969 -- # kill 59705 00:05:27.475 13:48:16 alias_rpc -- common/autotest_common.sh@974 -- # wait 59705 00:05:28.044 ************************************ 00:05:28.044 END TEST alias_rpc 00:05:28.044 ************************************ 00:05:28.044 00:05:28.044 real 0m1.813s 00:05:28.044 user 0m2.055s 00:05:28.044 sys 0m0.434s 00:05:28.044 13:48:16 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.044 13:48:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.044 13:48:16 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:28.044 13:48:16 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 
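The alias_rpc pass above drives the freshly started target through scripts/rpc.py load_config -i, which replays a JSON configuration read from stdin; -i is the flag visible in the trace, presumably so that the aliased (legacy) method names this test exercises are accepted. A sketch of that call, with a hypothetical config file name:

    # conf.json is a placeholder name; the test supplies its own JSON on stdin
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i < conf.json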
00:05:28.044 13:48:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.044 13:48:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.044 13:48:16 -- common/autotest_common.sh@10 -- # set +x 00:05:28.044 ************************************ 00:05:28.044 START TEST spdkcli_tcp 00:05:28.044 ************************************ 00:05:28.044 13:48:16 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:28.044 * Looking for test storage... 00:05:28.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:28.044 13:48:16 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:28.044 13:48:16 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:28.044 13:48:16 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:28.044 13:48:16 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:28.044 13:48:16 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:28.044 13:48:16 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:28.044 13:48:16 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:28.044 13:48:16 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:28.044 13:48:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.044 13:48:16 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59781 00:05:28.044 13:48:16 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59781 00:05:28.044 13:48:16 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:28.044 13:48:16 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 59781 ']' 00:05:28.044 13:48:16 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.044 13:48:16 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.044 13:48:16 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.044 13:48:16 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.044 13:48:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.044 [2024-07-25 13:48:17.069814] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:05:28.044 [2024-07-25 13:48:17.069931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59781 ] 00:05:28.302 [2024-07-25 13:48:17.207733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.302 [2024-07-25 13:48:17.319947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.302 [2024-07-25 13:48:17.319954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.560 [2024-07-25 13:48:17.372838] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:29.126 13:48:18 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.126 13:48:18 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:29.126 13:48:18 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59798 00:05:29.126 13:48:18 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:29.126 13:48:18 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:29.384 [ 00:05:29.384 "bdev_malloc_delete", 00:05:29.384 "bdev_malloc_create", 00:05:29.384 "bdev_null_resize", 00:05:29.384 "bdev_null_delete", 00:05:29.384 "bdev_null_create", 00:05:29.384 "bdev_nvme_cuse_unregister", 00:05:29.384 "bdev_nvme_cuse_register", 00:05:29.384 "bdev_opal_new_user", 00:05:29.384 "bdev_opal_set_lock_state", 00:05:29.384 "bdev_opal_delete", 00:05:29.384 "bdev_opal_get_info", 00:05:29.384 "bdev_opal_create", 00:05:29.384 "bdev_nvme_opal_revert", 00:05:29.384 "bdev_nvme_opal_init", 00:05:29.384 "bdev_nvme_send_cmd", 00:05:29.384 "bdev_nvme_get_path_iostat", 00:05:29.384 "bdev_nvme_get_mdns_discovery_info", 00:05:29.384 "bdev_nvme_stop_mdns_discovery", 00:05:29.384 "bdev_nvme_start_mdns_discovery", 00:05:29.384 "bdev_nvme_set_multipath_policy", 00:05:29.384 "bdev_nvme_set_preferred_path", 00:05:29.384 "bdev_nvme_get_io_paths", 00:05:29.384 "bdev_nvme_remove_error_injection", 00:05:29.384 "bdev_nvme_add_error_injection", 00:05:29.384 "bdev_nvme_get_discovery_info", 00:05:29.384 "bdev_nvme_stop_discovery", 00:05:29.384 "bdev_nvme_start_discovery", 00:05:29.384 "bdev_nvme_get_controller_health_info", 00:05:29.384 "bdev_nvme_disable_controller", 00:05:29.384 "bdev_nvme_enable_controller", 00:05:29.384 "bdev_nvme_reset_controller", 00:05:29.384 "bdev_nvme_get_transport_statistics", 00:05:29.384 "bdev_nvme_apply_firmware", 00:05:29.384 "bdev_nvme_detach_controller", 00:05:29.384 "bdev_nvme_get_controllers", 00:05:29.384 "bdev_nvme_attach_controller", 00:05:29.384 "bdev_nvme_set_hotplug", 00:05:29.384 "bdev_nvme_set_options", 00:05:29.384 "bdev_passthru_delete", 00:05:29.384 "bdev_passthru_create", 00:05:29.384 "bdev_lvol_set_parent_bdev", 00:05:29.384 "bdev_lvol_set_parent", 00:05:29.384 "bdev_lvol_check_shallow_copy", 00:05:29.384 "bdev_lvol_start_shallow_copy", 00:05:29.384 "bdev_lvol_grow_lvstore", 00:05:29.384 "bdev_lvol_get_lvols", 00:05:29.384 "bdev_lvol_get_lvstores", 00:05:29.384 "bdev_lvol_delete", 00:05:29.384 "bdev_lvol_set_read_only", 00:05:29.384 "bdev_lvol_resize", 00:05:29.384 "bdev_lvol_decouple_parent", 00:05:29.384 "bdev_lvol_inflate", 00:05:29.384 "bdev_lvol_rename", 00:05:29.384 "bdev_lvol_clone_bdev", 00:05:29.384 "bdev_lvol_clone", 00:05:29.384 "bdev_lvol_snapshot", 00:05:29.384 "bdev_lvol_create", 
00:05:29.384 "bdev_lvol_delete_lvstore", 00:05:29.384 "bdev_lvol_rename_lvstore", 00:05:29.384 "bdev_lvol_create_lvstore", 00:05:29.384 "bdev_raid_set_options", 00:05:29.384 "bdev_raid_remove_base_bdev", 00:05:29.384 "bdev_raid_add_base_bdev", 00:05:29.384 "bdev_raid_delete", 00:05:29.384 "bdev_raid_create", 00:05:29.384 "bdev_raid_get_bdevs", 00:05:29.384 "bdev_error_inject_error", 00:05:29.384 "bdev_error_delete", 00:05:29.384 "bdev_error_create", 00:05:29.384 "bdev_split_delete", 00:05:29.384 "bdev_split_create", 00:05:29.384 "bdev_delay_delete", 00:05:29.384 "bdev_delay_create", 00:05:29.384 "bdev_delay_update_latency", 00:05:29.384 "bdev_zone_block_delete", 00:05:29.384 "bdev_zone_block_create", 00:05:29.384 "blobfs_create", 00:05:29.384 "blobfs_detect", 00:05:29.384 "blobfs_set_cache_size", 00:05:29.384 "bdev_aio_delete", 00:05:29.384 "bdev_aio_rescan", 00:05:29.384 "bdev_aio_create", 00:05:29.384 "bdev_ftl_set_property", 00:05:29.384 "bdev_ftl_get_properties", 00:05:29.384 "bdev_ftl_get_stats", 00:05:29.384 "bdev_ftl_unmap", 00:05:29.384 "bdev_ftl_unload", 00:05:29.384 "bdev_ftl_delete", 00:05:29.384 "bdev_ftl_load", 00:05:29.384 "bdev_ftl_create", 00:05:29.384 "bdev_virtio_attach_controller", 00:05:29.384 "bdev_virtio_scsi_get_devices", 00:05:29.384 "bdev_virtio_detach_controller", 00:05:29.384 "bdev_virtio_blk_set_hotplug", 00:05:29.384 "bdev_iscsi_delete", 00:05:29.384 "bdev_iscsi_create", 00:05:29.384 "bdev_iscsi_set_options", 00:05:29.384 "bdev_uring_delete", 00:05:29.384 "bdev_uring_rescan", 00:05:29.384 "bdev_uring_create", 00:05:29.384 "accel_error_inject_error", 00:05:29.384 "ioat_scan_accel_module", 00:05:29.384 "dsa_scan_accel_module", 00:05:29.384 "iaa_scan_accel_module", 00:05:29.384 "keyring_file_remove_key", 00:05:29.384 "keyring_file_add_key", 00:05:29.384 "keyring_linux_set_options", 00:05:29.384 "iscsi_get_histogram", 00:05:29.384 "iscsi_enable_histogram", 00:05:29.384 "iscsi_set_options", 00:05:29.384 "iscsi_get_auth_groups", 00:05:29.384 "iscsi_auth_group_remove_secret", 00:05:29.384 "iscsi_auth_group_add_secret", 00:05:29.384 "iscsi_delete_auth_group", 00:05:29.384 "iscsi_create_auth_group", 00:05:29.384 "iscsi_set_discovery_auth", 00:05:29.384 "iscsi_get_options", 00:05:29.384 "iscsi_target_node_request_logout", 00:05:29.384 "iscsi_target_node_set_redirect", 00:05:29.384 "iscsi_target_node_set_auth", 00:05:29.384 "iscsi_target_node_add_lun", 00:05:29.384 "iscsi_get_stats", 00:05:29.384 "iscsi_get_connections", 00:05:29.384 "iscsi_portal_group_set_auth", 00:05:29.384 "iscsi_start_portal_group", 00:05:29.384 "iscsi_delete_portal_group", 00:05:29.384 "iscsi_create_portal_group", 00:05:29.384 "iscsi_get_portal_groups", 00:05:29.384 "iscsi_delete_target_node", 00:05:29.384 "iscsi_target_node_remove_pg_ig_maps", 00:05:29.384 "iscsi_target_node_add_pg_ig_maps", 00:05:29.384 "iscsi_create_target_node", 00:05:29.384 "iscsi_get_target_nodes", 00:05:29.384 "iscsi_delete_initiator_group", 00:05:29.384 "iscsi_initiator_group_remove_initiators", 00:05:29.384 "iscsi_initiator_group_add_initiators", 00:05:29.384 "iscsi_create_initiator_group", 00:05:29.384 "iscsi_get_initiator_groups", 00:05:29.384 "nvmf_set_crdt", 00:05:29.384 "nvmf_set_config", 00:05:29.384 "nvmf_set_max_subsystems", 00:05:29.384 "nvmf_stop_mdns_prr", 00:05:29.384 "nvmf_publish_mdns_prr", 00:05:29.384 "nvmf_subsystem_get_listeners", 00:05:29.384 "nvmf_subsystem_get_qpairs", 00:05:29.384 "nvmf_subsystem_get_controllers", 00:05:29.384 "nvmf_get_stats", 00:05:29.384 "nvmf_get_transports", 00:05:29.384 
"nvmf_create_transport", 00:05:29.384 "nvmf_get_targets", 00:05:29.384 "nvmf_delete_target", 00:05:29.384 "nvmf_create_target", 00:05:29.384 "nvmf_subsystem_allow_any_host", 00:05:29.385 "nvmf_subsystem_remove_host", 00:05:29.385 "nvmf_subsystem_add_host", 00:05:29.385 "nvmf_ns_remove_host", 00:05:29.385 "nvmf_ns_add_host", 00:05:29.385 "nvmf_subsystem_remove_ns", 00:05:29.385 "nvmf_subsystem_add_ns", 00:05:29.385 "nvmf_subsystem_listener_set_ana_state", 00:05:29.385 "nvmf_discovery_get_referrals", 00:05:29.385 "nvmf_discovery_remove_referral", 00:05:29.385 "nvmf_discovery_add_referral", 00:05:29.385 "nvmf_subsystem_remove_listener", 00:05:29.385 "nvmf_subsystem_add_listener", 00:05:29.385 "nvmf_delete_subsystem", 00:05:29.385 "nvmf_create_subsystem", 00:05:29.385 "nvmf_get_subsystems", 00:05:29.385 "env_dpdk_get_mem_stats", 00:05:29.385 "nbd_get_disks", 00:05:29.385 "nbd_stop_disk", 00:05:29.385 "nbd_start_disk", 00:05:29.385 "ublk_recover_disk", 00:05:29.385 "ublk_get_disks", 00:05:29.385 "ublk_stop_disk", 00:05:29.385 "ublk_start_disk", 00:05:29.385 "ublk_destroy_target", 00:05:29.385 "ublk_create_target", 00:05:29.385 "virtio_blk_create_transport", 00:05:29.385 "virtio_blk_get_transports", 00:05:29.385 "vhost_controller_set_coalescing", 00:05:29.385 "vhost_get_controllers", 00:05:29.385 "vhost_delete_controller", 00:05:29.385 "vhost_create_blk_controller", 00:05:29.385 "vhost_scsi_controller_remove_target", 00:05:29.385 "vhost_scsi_controller_add_target", 00:05:29.385 "vhost_start_scsi_controller", 00:05:29.385 "vhost_create_scsi_controller", 00:05:29.385 "thread_set_cpumask", 00:05:29.385 "framework_get_governor", 00:05:29.385 "framework_get_scheduler", 00:05:29.385 "framework_set_scheduler", 00:05:29.385 "framework_get_reactors", 00:05:29.385 "thread_get_io_channels", 00:05:29.385 "thread_get_pollers", 00:05:29.385 "thread_get_stats", 00:05:29.385 "framework_monitor_context_switch", 00:05:29.385 "spdk_kill_instance", 00:05:29.385 "log_enable_timestamps", 00:05:29.385 "log_get_flags", 00:05:29.385 "log_clear_flag", 00:05:29.385 "log_set_flag", 00:05:29.385 "log_get_level", 00:05:29.385 "log_set_level", 00:05:29.385 "log_get_print_level", 00:05:29.385 "log_set_print_level", 00:05:29.385 "framework_enable_cpumask_locks", 00:05:29.385 "framework_disable_cpumask_locks", 00:05:29.385 "framework_wait_init", 00:05:29.385 "framework_start_init", 00:05:29.385 "scsi_get_devices", 00:05:29.385 "bdev_get_histogram", 00:05:29.385 "bdev_enable_histogram", 00:05:29.385 "bdev_set_qos_limit", 00:05:29.385 "bdev_set_qd_sampling_period", 00:05:29.385 "bdev_get_bdevs", 00:05:29.385 "bdev_reset_iostat", 00:05:29.385 "bdev_get_iostat", 00:05:29.385 "bdev_examine", 00:05:29.385 "bdev_wait_for_examine", 00:05:29.385 "bdev_set_options", 00:05:29.385 "notify_get_notifications", 00:05:29.385 "notify_get_types", 00:05:29.385 "accel_get_stats", 00:05:29.385 "accel_set_options", 00:05:29.385 "accel_set_driver", 00:05:29.385 "accel_crypto_key_destroy", 00:05:29.385 "accel_crypto_keys_get", 00:05:29.385 "accel_crypto_key_create", 00:05:29.385 "accel_assign_opc", 00:05:29.385 "accel_get_module_info", 00:05:29.385 "accel_get_opc_assignments", 00:05:29.385 "vmd_rescan", 00:05:29.385 "vmd_remove_device", 00:05:29.385 "vmd_enable", 00:05:29.385 "sock_get_default_impl", 00:05:29.385 "sock_set_default_impl", 00:05:29.385 "sock_impl_set_options", 00:05:29.385 "sock_impl_get_options", 00:05:29.385 "iobuf_get_stats", 00:05:29.385 "iobuf_set_options", 00:05:29.385 "framework_get_pci_devices", 00:05:29.385 
"framework_get_config", 00:05:29.385 "framework_get_subsystems", 00:05:29.385 "trace_get_info", 00:05:29.385 "trace_get_tpoint_group_mask", 00:05:29.385 "trace_disable_tpoint_group", 00:05:29.385 "trace_enable_tpoint_group", 00:05:29.385 "trace_clear_tpoint_mask", 00:05:29.385 "trace_set_tpoint_mask", 00:05:29.385 "keyring_get_keys", 00:05:29.385 "spdk_get_version", 00:05:29.385 "rpc_get_methods" 00:05:29.385 ] 00:05:29.385 13:48:18 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:29.385 13:48:18 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:29.385 13:48:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.385 13:48:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:29.385 13:48:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59781 00:05:29.385 13:48:18 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 59781 ']' 00:05:29.385 13:48:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 59781 00:05:29.385 13:48:18 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:29.385 13:48:18 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:29.385 13:48:18 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59781 00:05:29.385 killing process with pid 59781 00:05:29.385 13:48:18 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:29.385 13:48:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:29.385 13:48:18 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59781' 00:05:29.385 13:48:18 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 59781 00:05:29.385 13:48:18 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 59781 00:05:29.951 ************************************ 00:05:29.951 END TEST spdkcli_tcp 00:05:29.951 ************************************ 00:05:29.951 00:05:29.951 real 0m1.875s 00:05:29.951 user 0m3.521s 00:05:29.951 sys 0m0.471s 00:05:29.951 13:48:18 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.951 13:48:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.951 13:48:18 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:29.951 13:48:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.951 13:48:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.951 13:48:18 -- common/autotest_common.sh@10 -- # set +x 00:05:29.951 ************************************ 00:05:29.951 START TEST dpdk_mem_utility 00:05:29.951 ************************************ 00:05:29.952 13:48:18 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:29.952 * Looking for test storage... 00:05:29.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:29.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:29.952 13:48:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:29.952 13:48:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59868 00:05:29.952 13:48:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59868 00:05:29.952 13:48:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.952 13:48:18 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 59868 ']' 00:05:29.952 13:48:18 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.952 13:48:18 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.952 13:48:18 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.952 13:48:18 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.952 13:48:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:29.952 [2024-07-25 13:48:18.969583] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:05:29.952 [2024-07-25 13:48:18.969686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59868 ] 00:05:30.278 [2024-07-25 13:48:19.109902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.278 [2024-07-25 13:48:19.205682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.278 [2024-07-25 13:48:19.259525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:31.217 13:48:19 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.217 13:48:19 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:31.217 13:48:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:31.217 13:48:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:31.218 13:48:19 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.218 13:48:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:31.218 { 00:05:31.218 "filename": "/tmp/spdk_mem_dump.txt" 00:05:31.218 } 00:05:31.218 13:48:19 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.218 13:48:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:31.218 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:31.218 1 heaps totaling size 814.000000 MiB 00:05:31.218 size: 814.000000 MiB heap id: 0 00:05:31.218 end heaps---------- 00:05:31.218 8 mempools totaling size 598.116089 MiB 00:05:31.218 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:31.218 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:31.218 size: 84.521057 MiB name: bdev_io_59868 00:05:31.218 size: 51.011292 MiB name: evtpool_59868 00:05:31.218 size: 50.003479 MiB name: msgpool_59868 00:05:31.218 size: 21.763794 MiB name: PDU_Pool 00:05:31.218 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:31.218 size: 0.026123 MiB name: Session_Pool 00:05:31.218 end 
mempools------- 00:05:31.218 6 memzones totaling size 4.142822 MiB 00:05:31.218 size: 1.000366 MiB name: RG_ring_0_59868 00:05:31.218 size: 1.000366 MiB name: RG_ring_1_59868 00:05:31.218 size: 1.000366 MiB name: RG_ring_4_59868 00:05:31.218 size: 1.000366 MiB name: RG_ring_5_59868 00:05:31.218 size: 0.125366 MiB name: RG_ring_2_59868 00:05:31.218 size: 0.015991 MiB name: RG_ring_3_59868 00:05:31.218 end memzones------- 00:05:31.218 13:48:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:31.218 heap id: 0 total size: 814.000000 MiB number of busy elements: 302 number of free elements: 15 00:05:31.218 list of free elements. size: 12.471558 MiB 00:05:31.218 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:31.218 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:31.218 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:31.218 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:31.218 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:31.218 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:31.218 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:31.218 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:31.218 element at address: 0x200000200000 with size: 0.833191 MiB 00:05:31.218 element at address: 0x20001aa00000 with size: 0.568787 MiB 00:05:31.218 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:31.218 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:31.218 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:31.218 element at address: 0x200027e00000 with size: 0.395935 MiB 00:05:31.218 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:31.218 list of standard malloc elements. 
size: 199.265869 MiB 00:05:31.218 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:31.218 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:31.218 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:31.218 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:31.218 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:31.218 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:31.218 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:31.218 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:31.218 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:31.218 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:05:31.218 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:31.218 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:31.218 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:31.218 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:31.218 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:31.218 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:31.218 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:31.218 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:31.218 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:31.218 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:31.218 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:31.218 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:31.219 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:31.219 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:31.219 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa91e40 
with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa94300 with size: 0.000183 MiB 
00:05:31.219 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:31.219 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e65680 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:31.219 element at 
address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:31.219 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6fa80 
with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:31.220 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:31.220 list of memzone associated elements. size: 602.262573 MiB 00:05:31.220 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:31.220 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:31.220 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:31.220 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:31.220 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:31.220 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59868_0 00:05:31.220 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:31.220 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59868_0 00:05:31.220 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:31.220 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59868_0 00:05:31.220 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:31.220 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:31.220 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:31.220 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:31.220 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:31.220 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59868 00:05:31.220 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:31.220 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59868 00:05:31.220 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:31.220 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59868 00:05:31.220 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:31.220 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:31.220 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:31.220 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:31.220 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:31.220 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:31.220 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:31.220 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:31.220 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:31.220 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59868 00:05:31.220 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:31.220 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59868 00:05:31.220 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:31.220 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59868 00:05:31.220 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:31.220 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59868 00:05:31.220 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:31.220 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59868 00:05:31.220 element at address: 0x20000b27db80 with size: 0.500488 MiB 
00:05:31.220 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:31.220 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:31.220 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:31.220 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:31.220 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:31.220 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:31.220 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59868 00:05:31.220 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:31.220 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:31.220 element at address: 0x200027e65740 with size: 0.023743 MiB 00:05:31.220 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:31.220 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:31.220 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59868 00:05:31.220 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:05:31.220 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:31.220 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:31.220 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59868 00:05:31.220 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:31.220 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59868 00:05:31.220 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:05:31.220 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:31.220 13:48:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:31.220 13:48:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59868 00:05:31.220 13:48:20 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 59868 ']' 00:05:31.220 13:48:20 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 59868 00:05:31.220 13:48:20 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:31.220 13:48:20 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:31.220 13:48:20 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59868 00:05:31.220 killing process with pid 59868 00:05:31.220 13:48:20 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:31.220 13:48:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:31.220 13:48:20 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59868' 00:05:31.220 13:48:20 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 59868 00:05:31.220 13:48:20 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 59868 00:05:31.787 00:05:31.787 real 0m1.712s 00:05:31.787 user 0m1.851s 00:05:31.787 sys 0m0.437s 00:05:31.787 ************************************ 00:05:31.787 END TEST dpdk_mem_utility 00:05:31.787 ************************************ 00:05:31.787 13:48:20 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.787 13:48:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:31.787 13:48:20 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:31.787 13:48:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.787 13:48:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.787 
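The dpdk_mem_utility pass above has two steps: the env_dpdk_get_mem_stats RPC makes the target write its DPDK memory state (the reply names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then renders that dump, once as the heap/mempool/memzone summary and once with -m 0 for the element-level listing shown above. A condensed sketch of the same flow:

    # ask the running target for a DPDK memory dump; the RPC reply names the dump file
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    #   -> { "filename": "/tmp/spdk_mem_dump.txt" }
    # summarize heaps, mempools and memzones from that dump
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    # second pass with -m 0, as in the trace, for the detailed element/memzone listing
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0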
13:48:20 -- common/autotest_common.sh@10 -- # set +x 00:05:31.787 ************************************ 00:05:31.787 START TEST event 00:05:31.787 ************************************ 00:05:31.787 13:48:20 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:31.787 * Looking for test storage... 00:05:31.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:31.787 13:48:20 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:31.787 13:48:20 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:31.787 13:48:20 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:31.787 13:48:20 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:31.787 13:48:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.787 13:48:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.787 ************************************ 00:05:31.787 START TEST event_perf 00:05:31.787 ************************************ 00:05:31.787 13:48:20 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:31.787 Running I/O for 1 seconds...[2024-07-25 13:48:20.692661] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:05:31.787 [2024-07-25 13:48:20.692772] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59943 ] 00:05:32.046 [2024-07-25 13:48:20.827603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.046 [2024-07-25 13:48:20.931378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.046 [2024-07-25 13:48:20.931487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.046 [2024-07-25 13:48:20.931622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.046 [2024-07-25 13:48:20.931626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.983 Running I/O for 1 seconds... 00:05:32.983 lcore 0: 192794 00:05:32.983 lcore 1: 192794 00:05:32.983 lcore 2: 192792 00:05:32.983 lcore 3: 192792 00:05:32.983 done. 
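A minimal sketch of how the event_perf run above is launched, assuming the same built SPDK tree this job uses; the binary path and the -m 0xF -t 1 arguments are copied from the run_test invocation in the log, everything else is illustrative.

  # Drive the SPDK event-loop benchmark on cores 0-3 (-m 0xF) for 1 second (-t 1).
  # On exit it prints one "lcore N: <events processed>" line per reactor, which is
  # the per-core output shown above.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1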
00:05:32.983 00:05:32.983 real 0m1.337s 00:05:32.983 ************************************ 00:05:32.983 END TEST event_perf 00:05:32.983 ************************************ 00:05:32.983 user 0m4.128s 00:05:32.983 sys 0m0.062s 00:05:32.983 13:48:22 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.983 13:48:22 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:33.240 13:48:22 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:33.240 13:48:22 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:33.240 13:48:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.240 13:48:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.240 ************************************ 00:05:33.240 START TEST event_reactor 00:05:33.240 ************************************ 00:05:33.240 13:48:22 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:33.240 [2024-07-25 13:48:22.081200] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:05:33.240 [2024-07-25 13:48:22.081295] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59981 ] 00:05:33.240 [2024-07-25 13:48:22.219681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.498 [2024-07-25 13:48:22.322352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.434 test_start 00:05:34.434 oneshot 00:05:34.434 tick 100 00:05:34.434 tick 100 00:05:34.434 tick 250 00:05:34.434 tick 100 00:05:34.434 tick 100 00:05:34.434 tick 100 00:05:34.434 tick 250 00:05:34.434 tick 500 00:05:34.434 tick 100 00:05:34.434 tick 100 00:05:34.434 tick 250 00:05:34.434 tick 100 00:05:34.434 tick 100 00:05:34.434 test_end 00:05:34.434 00:05:34.434 real 0m1.342s 00:05:34.434 user 0m1.181s 00:05:34.434 sys 0m0.055s 00:05:34.434 ************************************ 00:05:34.434 END TEST event_reactor 00:05:34.434 ************************************ 00:05:34.434 13:48:23 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.434 13:48:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:34.434 13:48:23 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.434 13:48:23 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:34.434 13:48:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.434 13:48:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.434 ************************************ 00:05:34.434 START TEST event_reactor_perf 00:05:34.434 ************************************ 00:05:34.434 13:48:23 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.694 [2024-07-25 13:48:23.473984] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:05:34.694 [2024-07-25 13:48:23.474070] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60012 ] 00:05:34.694 [2024-07-25 13:48:23.605392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.694 [2024-07-25 13:48:23.716404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.068 test_start 00:05:36.068 test_end 00:05:36.068 Performance: 390000 events per second 00:05:36.068 00:05:36.068 real 0m1.349s 00:05:36.068 user 0m1.194s 00:05:36.068 sys 0m0.050s 00:05:36.068 13:48:24 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.068 13:48:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.068 ************************************ 00:05:36.068 END TEST event_reactor_perf 00:05:36.068 ************************************ 00:05:36.068 13:48:24 event -- event/event.sh@49 -- # uname -s 00:05:36.068 13:48:24 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:36.068 13:48:24 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:36.068 13:48:24 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.068 13:48:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.068 13:48:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.068 ************************************ 00:05:36.068 START TEST event_scheduler 00:05:36.068 ************************************ 00:05:36.068 13:48:24 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:36.069 * Looking for test storage... 00:05:36.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:36.069 13:48:24 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:36.069 13:48:24 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60074 00:05:36.069 13:48:24 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.069 13:48:24 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60074 00:05:36.069 13:48:24 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:36.069 13:48:24 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 60074 ']' 00:05:36.069 13:48:24 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.069 13:48:24 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.069 13:48:24 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.069 13:48:24 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.069 13:48:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.069 [2024-07-25 13:48:24.998743] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:05:36.069 [2024-07-25 13:48:24.998844] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60074 ] 00:05:36.327 [2024-07-25 13:48:25.141906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.327 [2024-07-25 13:48:25.282908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.327 [2024-07-25 13:48:25.283015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.327 [2024-07-25 13:48:25.283155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.327 [2024-07-25 13:48:25.283164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.263 13:48:25 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.263 13:48:25 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:37.263 13:48:25 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:37.263 13:48:25 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.263 13:48:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.263 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:37.263 POWER: Cannot set governor of lcore 0 to userspace 00:05:37.263 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:37.263 POWER: Cannot set governor of lcore 0 to performance 00:05:37.263 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:37.263 POWER: Cannot set governor of lcore 0 to userspace 00:05:37.263 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:37.264 POWER: Cannot set governor of lcore 0 to userspace 00:05:37.264 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:37.264 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:37.264 POWER: Unable to set Power Management Environment for lcore 0 00:05:37.264 [2024-07-25 13:48:25.960016] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:37.264 [2024-07-25 13:48:25.960328] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:37.264 [2024-07-25 13:48:25.960575] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:37.264 [2024-07-25 13:48:25.960825] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:37.264 [2024-07-25 13:48:25.961069] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:37.264 [2024-07-25 13:48:25.961316] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:37.264 13:48:25 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.264 13:48:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:37.264 13:48:25 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.264 13:48:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 [2024-07-25 13:48:26.023829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:37.264 [2024-07-25 13:48:26.058455] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:37.264 13:48:26 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.264 13:48:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:37.264 13:48:26 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.264 13:48:26 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.264 13:48:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 ************************************ 00:05:37.264 START TEST scheduler_create_thread 00:05:37.264 ************************************ 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 2 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 3 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 4 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 5 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 6 00:05:37.264 
13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 7 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 8 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 9 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 10 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.264 13:48:26 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.264 13:48:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.639 ************************************ 00:05:38.639 END TEST scheduler_create_thread 00:05:38.639 ************************************ 00:05:38.639 13:48:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.639 00:05:38.639 real 0m1.168s 00:05:38.639 user 0m0.015s 00:05:38.639 sys 0m0.006s 00:05:38.639 13:48:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.639 13:48:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.639 13:48:27 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:38.639 13:48:27 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60074 00:05:38.639 13:48:27 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 60074 ']' 00:05:38.639 13:48:27 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 60074 00:05:38.639 13:48:27 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:38.639 13:48:27 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.639 13:48:27 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60074 00:05:38.639 killing process with pid 60074 00:05:38.639 13:48:27 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:38.639 13:48:27 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:38.639 13:48:27 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60074' 00:05:38.639 13:48:27 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 60074 00:05:38.639 13:48:27 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 60074 00:05:38.897 [2024-07-25 13:48:27.717372] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
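A minimal sketch of the RPC sequence the scheduler test above exercises, using only method names and arguments that appear in the log. It assumes scripts/rpc.py talking to the app's default /var/tmp/spdk.sock socket (the rpc_cmd helper in the log issues the same calls) and that the scheduler_plugin module from test/event/scheduler is importable; both of those details are assumptions, not shown in the log.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC framework_set_scheduler dynamic    # logs a NOTICE and continues if the dpdk governor cannot initialize, as above
  $RPC framework_start_init               # releases the --wait-for-rpc pause
  # pinned active/idle threads plus unpinned ones, mirroring scheduler.sh@12-22
  $RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  $RPC --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  $RPC --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
  $RPC --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread id 11 is the value this run returned
  $RPC --plugin scheduler_plugin scheduler_thread_delete 12          # id 12 likewise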
00:05:39.155 ************************************ 00:05:39.155 END TEST event_scheduler 00:05:39.155 ************************************ 00:05:39.155 00:05:39.155 real 0m3.101s 00:05:39.155 user 0m5.367s 00:05:39.155 sys 0m0.373s 00:05:39.155 13:48:27 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.155 13:48:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.155 13:48:28 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:39.155 13:48:28 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:39.155 13:48:28 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.155 13:48:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.155 13:48:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.155 ************************************ 00:05:39.155 START TEST app_repeat 00:05:39.155 ************************************ 00:05:39.155 13:48:28 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:39.155 13:48:28 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.155 13:48:28 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.155 13:48:28 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:39.155 13:48:28 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.155 13:48:28 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:39.155 13:48:28 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:39.155 13:48:28 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:39.155 13:48:28 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60157 00:05:39.155 13:48:28 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.155 Process app_repeat pid: 60157 00:05:39.155 13:48:28 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60157' 00:05:39.155 13:48:28 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:39.155 13:48:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:39.155 spdk_app_start Round 0 00:05:39.155 13:48:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:39.155 13:48:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60157 /var/tmp/spdk-nbd.sock 00:05:39.155 13:48:28 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60157 ']' 00:05:39.155 13:48:28 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:39.155 13:48:28 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.155 13:48:28 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:39.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:39.155 13:48:28 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.155 13:48:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:39.155 [2024-07-25 13:48:28.051106] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
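A brief sketch of how the app_repeat application above is started; the binary path, flags, and socket come straight from the log (repeat_times=4, core mask 0x3, RPC listener on /var/tmp/spdk-nbd.sock so the nbd helpers can reach it), while the backgrounding and pid capture are just the usual pattern the test's waitforlisten step implies.

  APP=/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat
  "$APP" -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
  repeat_pid=$!
  # the test then waits for the RPC socket to come up before running each
  # 'spdk_app_start Round N' pass against it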
00:05:39.155 [2024-07-25 13:48:28.051231] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60157 ] 00:05:39.413 [2024-07-25 13:48:28.193042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.413 [2024-07-25 13:48:28.330527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.413 [2024-07-25 13:48:28.330540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.413 [2024-07-25 13:48:28.392059] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:40.347 13:48:29 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.347 13:48:29 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:40.347 13:48:29 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.347 Malloc0 00:05:40.347 13:48:29 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.915 Malloc1 00:05:40.915 13:48:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.915 13:48:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.915 13:48:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.915 13:48:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.915 13:48:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.915 13:48:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.915 13:48:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.915 13:48:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.915 13:48:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.915 13:48:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.915 13:48:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.915 13:48:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.915 13:48:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.915 13:48:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.915 13:48:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.915 13:48:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.915 /dev/nbd0 00:05:40.915 13:48:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.915 13:48:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.915 13:48:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:40.915 13:48:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:40.915 13:48:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:40.915 13:48:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:40.915 13:48:29 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:40.915 13:48:29 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:40.915 13:48:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:40.915 13:48:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:40.915 13:48:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.915 1+0 records in 00:05:40.915 1+0 records out 00:05:40.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281928 s, 14.5 MB/s 00:05:40.915 13:48:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.915 13:48:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:40.915 13:48:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.915 13:48:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:40.915 13:48:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:40.915 13:48:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.915 13:48:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.915 13:48:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.175 /dev/nbd1 00:05:41.175 13:48:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.175 13:48:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.175 13:48:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:41.175 13:48:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:41.175 13:48:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:41.175 13:48:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:41.175 13:48:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:41.175 13:48:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:41.175 13:48:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:41.175 13:48:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:41.175 13:48:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.175 1+0 records in 00:05:41.175 1+0 records out 00:05:41.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415505 s, 9.9 MB/s 00:05:41.175 13:48:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.175 13:48:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:41.175 13:48:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.175 13:48:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:41.175 13:48:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:41.175 13:48:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.175 13:48:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.175 13:48:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:05:41.175 13:48:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.175 13:48:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.433 13:48:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.433 { 00:05:41.433 "nbd_device": "/dev/nbd0", 00:05:41.433 "bdev_name": "Malloc0" 00:05:41.433 }, 00:05:41.433 { 00:05:41.433 "nbd_device": "/dev/nbd1", 00:05:41.433 "bdev_name": "Malloc1" 00:05:41.433 } 00:05:41.433 ]' 00:05:41.434 13:48:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.434 { 00:05:41.434 "nbd_device": "/dev/nbd0", 00:05:41.434 "bdev_name": "Malloc0" 00:05:41.434 }, 00:05:41.434 { 00:05:41.434 "nbd_device": "/dev/nbd1", 00:05:41.434 "bdev_name": "Malloc1" 00:05:41.434 } 00:05:41.434 ]' 00:05:41.434 13:48:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.692 /dev/nbd1' 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.692 /dev/nbd1' 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.692 256+0 records in 00:05:41.692 256+0 records out 00:05:41.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00581862 s, 180 MB/s 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.692 256+0 records in 00:05:41.692 256+0 records out 00:05:41.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223171 s, 47.0 MB/s 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.692 256+0 records in 00:05:41.692 256+0 records out 00:05:41.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274549 s, 38.2 MB/s 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.692 13:48:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.951 13:48:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.951 13:48:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.951 13:48:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.951 13:48:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.951 13:48:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.951 13:48:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.951 13:48:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.951 13:48:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.951 13:48:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.951 13:48:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.209 13:48:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.209 13:48:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.209 13:48:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.209 13:48:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.209 13:48:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.209 13:48:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.209 13:48:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.209 13:48:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.209 13:48:31 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.209 13:48:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.209 13:48:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.467 13:48:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.467 13:48:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.467 13:48:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.467 13:48:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.467 13:48:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.467 13:48:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.467 13:48:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.467 13:48:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.467 13:48:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.467 13:48:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.468 13:48:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.468 13:48:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.468 13:48:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.726 13:48:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.984 [2024-07-25 13:48:31.951217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.242 [2024-07-25 13:48:32.036266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.242 [2024-07-25 13:48:32.036277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.242 [2024-07-25 13:48:32.093834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:43.242 [2024-07-25 13:48:32.093949] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:43.242 [2024-07-25 13:48:32.093962] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.772 13:48:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:45.772 spdk_app_start Round 1 00:05:45.772 13:48:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:45.772 13:48:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60157 /var/tmp/spdk-nbd.sock 00:05:45.772 13:48:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60157 ']' 00:05:45.772 13:48:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.772 13:48:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.772 13:48:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
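A minimal sketch of the per-round bdev/NBD setup that Round 0 above performed and that Round 1 is about to repeat, using only RPCs that appear in the log; the rpc.py path and the -s socket match the ones the test passes, and the helper function is just a convenience wrapper.

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
  rpc bdev_malloc_create 64 4096          # 64 MiB malloc bdev, 4096-byte blocks -> Malloc0
  rpc bdev_malloc_create 64 4096          # second bdev -> Malloc1
  rpc nbd_start_disk Malloc0 /dev/nbd0    # expose each bdev as a kernel NBD device
  rpc nbd_start_disk Malloc1 /dev/nbd1
  rpc nbd_get_disks                       # reports the two nbd_device/bdev_name pairs seen above
  # ... data is written and verified through /dev/nbd0 and /dev/nbd1 (see the dd/cmp sketch further below) ...
  rpc nbd_stop_disk /dev/nbd0
  rpc nbd_stop_disk /dev/nbd1
  rpc spdk_kill_instance SIGTERM          # stops this round's app instance; the next round's startup follows in the log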
00:05:45.772 13:48:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.772 13:48:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.029 13:48:35 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.029 13:48:35 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:46.029 13:48:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.287 Malloc0 00:05:46.287 13:48:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.546 Malloc1 00:05:46.546 13:48:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.546 13:48:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.546 13:48:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.546 13:48:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.546 13:48:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.546 13:48:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.546 13:48:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.546 13:48:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.546 13:48:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.546 13:48:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.546 13:48:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.546 13:48:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.546 13:48:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:46.546 13:48:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.546 13:48:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.546 13:48:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.805 /dev/nbd0 00:05:47.064 13:48:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:47.064 13:48:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:47.064 13:48:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:47.064 13:48:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:47.064 13:48:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:47.064 13:48:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:47.064 13:48:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:47.064 13:48:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:47.064 13:48:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:47.064 13:48:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:47.064 13:48:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.064 1+0 records in 00:05:47.064 1+0 records out 
00:05:47.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199087 s, 20.6 MB/s 00:05:47.064 13:48:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.064 13:48:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:47.064 13:48:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.064 13:48:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:47.064 13:48:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:47.064 13:48:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.064 13:48:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.064 13:48:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:47.324 /dev/nbd1 00:05:47.324 13:48:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:47.324 13:48:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:47.324 13:48:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:47.324 13:48:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:47.324 13:48:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:47.324 13:48:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:47.324 13:48:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:47.324 13:48:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:47.324 13:48:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:47.324 13:48:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:47.324 13:48:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.324 1+0 records in 00:05:47.324 1+0 records out 00:05:47.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255854 s, 16.0 MB/s 00:05:47.324 13:48:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.324 13:48:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:47.324 13:48:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.324 13:48:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:47.324 13:48:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:47.324 13:48:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.324 13:48:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.324 13:48:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.324 13:48:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.324 13:48:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.584 { 00:05:47.584 "nbd_device": "/dev/nbd0", 00:05:47.584 "bdev_name": "Malloc0" 00:05:47.584 }, 00:05:47.584 { 00:05:47.584 "nbd_device": "/dev/nbd1", 00:05:47.584 "bdev_name": "Malloc1" 00:05:47.584 } 
00:05:47.584 ]' 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.584 { 00:05:47.584 "nbd_device": "/dev/nbd0", 00:05:47.584 "bdev_name": "Malloc0" 00:05:47.584 }, 00:05:47.584 { 00:05:47.584 "nbd_device": "/dev/nbd1", 00:05:47.584 "bdev_name": "Malloc1" 00:05:47.584 } 00:05:47.584 ]' 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:47.584 /dev/nbd1' 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:47.584 /dev/nbd1' 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.584 256+0 records in 00:05:47.584 256+0 records out 00:05:47.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00692389 s, 151 MB/s 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.584 256+0 records in 00:05:47.584 256+0 records out 00:05:47.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244585 s, 42.9 MB/s 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.584 256+0 records in 00:05:47.584 256+0 records out 00:05:47.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253359 s, 41.4 MB/s 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.584 13:48:36 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.584 13:48:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.843 13:48:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.843 13:48:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.843 13:48:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.843 13:48:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.843 13:48:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.843 13:48:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.843 13:48:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.843 13:48:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.843 13:48:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.843 13:48:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:48.102 13:48:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:48.102 13:48:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:48.102 13:48:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:48.102 13:48:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.102 13:48:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.102 13:48:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:48.102 13:48:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:48.102 13:48:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.102 13:48:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.102 13:48:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.102 13:48:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.359 13:48:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:48.359 13:48:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.359 13:48:37 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:48.616 13:48:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:48.616 13:48:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:48.616 13:48:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.616 13:48:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:48.616 13:48:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:48.616 13:48:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:48.616 13:48:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:48.616 13:48:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:48.616 13:48:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:48.616 13:48:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.874 13:48:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:49.131 [2024-07-25 13:48:37.911931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.131 [2024-07-25 13:48:38.016544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.131 [2024-07-25 13:48:38.016555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.131 [2024-07-25 13:48:38.078688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:49.131 [2024-07-25 13:48:38.078768] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:49.131 [2024-07-25 13:48:38.078782] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:52.416 13:48:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:52.416 spdk_app_start Round 2 00:05:52.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:52.416 13:48:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:52.416 13:48:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60157 /var/tmp/spdk-nbd.sock 00:05:52.416 13:48:40 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60157 ']' 00:05:52.416 13:48:40 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.416 13:48:40 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.416 13:48:40 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
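The write/verify pass traced above comes down to three standard tools: dd fills a scratch file with random bytes, dd copies it onto each exported NBD device with O_DIRECT, and cmp reads the first 1 MiB back. A minimal standalone sketch of that idea (scratch path and device names are illustrative, not taken from this run):

    tmp_file=/tmp/nbdrandtest                 # assumed scratch location
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        # push the pattern through the NBD export, bypassing the page cache
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"       # any mismatch fails the test
    done
    rm "$tmp_file"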
00:05:52.416 13:48:40 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.416 13:48:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.416 13:48:40 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.416 13:48:40 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:52.416 13:48:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.416 Malloc0 00:05:52.416 13:48:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.674 Malloc1 00:05:52.675 13:48:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.675 13:48:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.675 13:48:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.675 13:48:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:52.675 13:48:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.675 13:48:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:52.675 13:48:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.675 13:48:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.675 13:48:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.675 13:48:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:52.675 13:48:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.675 13:48:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:52.675 13:48:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:52.675 13:48:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:52.675 13:48:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.675 13:48:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.934 /dev/nbd0 00:05:52.934 13:48:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.934 13:48:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:52.934 13:48:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:52.934 13:48:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:52.934 13:48:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:52.934 13:48:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:52.934 13:48:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:52.934 13:48:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:52.934 13:48:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:52.934 13:48:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:52.934 13:48:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.934 1+0 records in 00:05:52.934 1+0 records out 
00:05:52.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372053 s, 11.0 MB/s 00:05:52.934 13:48:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.934 13:48:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:52.934 13:48:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.934 13:48:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:52.934 13:48:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:52.934 13:48:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.934 13:48:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.934 13:48:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:53.193 /dev/nbd1 00:05:53.193 13:48:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:53.193 13:48:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:53.193 13:48:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:53.193 13:48:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:53.193 13:48:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:53.193 13:48:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:53.193 13:48:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:53.193 13:48:42 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:53.193 13:48:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:53.193 13:48:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:53.193 13:48:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.193 1+0 records in 00:05:53.193 1+0 records out 00:05:53.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332629 s, 12.3 MB/s 00:05:53.193 13:48:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:53.193 13:48:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:53.193 13:48:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:53.193 13:48:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:53.193 13:48:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:53.193 13:48:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.193 13:48:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.193 13:48:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.193 13:48:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.193 13:48:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.451 13:48:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:53.451 { 00:05:53.451 "nbd_device": "/dev/nbd0", 00:05:53.451 "bdev_name": "Malloc0" 00:05:53.451 }, 00:05:53.451 { 00:05:53.451 "nbd_device": "/dev/nbd1", 00:05:53.451 "bdev_name": "Malloc1" 00:05:53.451 } 
00:05:53.451 ]' 00:05:53.451 13:48:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:53.451 { 00:05:53.451 "nbd_device": "/dev/nbd0", 00:05:53.451 "bdev_name": "Malloc0" 00:05:53.451 }, 00:05:53.451 { 00:05:53.451 "nbd_device": "/dev/nbd1", 00:05:53.451 "bdev_name": "Malloc1" 00:05:53.451 } 00:05:53.451 ]' 00:05:53.451 13:48:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.451 13:48:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:53.451 /dev/nbd1' 00:05:53.451 13:48:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:53.451 /dev/nbd1' 00:05:53.451 13:48:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.451 13:48:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:53.452 13:48:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:53.452 13:48:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:53.452 13:48:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:53.452 13:48:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:53.452 13:48:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.452 13:48:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.452 13:48:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.452 13:48:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.452 13:48:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.452 13:48:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:53.710 256+0 records in 00:05:53.710 256+0 records out 00:05:53.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00781201 s, 134 MB/s 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.710 256+0 records in 00:05:53.710 256+0 records out 00:05:53.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023901 s, 43.9 MB/s 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.710 256+0 records in 00:05:53.710 256+0 records out 00:05:53.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285184 s, 36.8 MB/s 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.710 13:48:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.969 13:48:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.969 13:48:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.969 13:48:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.969 13:48:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.969 13:48:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.969 13:48:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.969 13:48:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.969 13:48:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.969 13:48:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.969 13:48:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.227 13:48:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.227 13:48:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.227 13:48:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.227 13:48:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.227 13:48:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.227 13:48:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:54.227 13:48:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.228 13:48:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.228 13:48:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.228 13:48:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.228 13:48:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.486 13:48:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:54.486 13:48:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.486 13:48:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:05:54.486 13:48:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.486 13:48:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.486 13:48:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.486 13:48:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:54.486 13:48:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.486 13:48:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.486 13:48:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:54.486 13:48:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:54.486 13:48:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:54.486 13:48:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.745 13:48:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:55.011 [2024-07-25 13:48:43.896000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.011 [2024-07-25 13:48:44.000507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.011 [2024-07-25 13:48:44.000517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.295 [2024-07-25 13:48:44.062761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:55.295 [2024-07-25 13:48:44.062884] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:55.295 [2024-07-25 13:48:44.062897] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.830 13:48:46 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60157 /var/tmp/spdk-nbd.sock 00:05:57.830 13:48:46 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60157 ']' 00:05:57.830 13:48:46 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.830 13:48:46 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.830 13:48:46 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
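The teardown traced above follows one pattern per device: ask the target over its RPC socket to stop the NBD export, poll /proc/partitions until the kernel node disappears, then confirm nbd_get_disks reports nothing. A sketch under the same assumptions (rpc.py path shortened; the retry budget of 20 mirrors the helper's loop counter, and the short sleep between polls is an assumption):

    rpc_sock=/var/tmp/spdk-nbd.sock
    for dev in /dev/nbd0 /dev/nbd1; do
        scripts/rpc.py -s "$rpc_sock" nbd_stop_disk "$dev"
        name=$(basename "$dev")
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break   # gone -> done
            sleep 0.1
        done
    done
    # with everything detached, the disk list should be empty
    count=$(scripts/rpc.py -s "$rpc_sock" nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]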
00:05:57.830 13:48:46 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.830 13:48:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.089 13:48:46 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.089 13:48:46 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:58.089 13:48:46 event.app_repeat -- event/event.sh@39 -- # killprocess 60157 00:05:58.089 13:48:46 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 60157 ']' 00:05:58.089 13:48:46 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 60157 00:05:58.089 13:48:46 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:58.089 13:48:46 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.089 13:48:46 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60157 00:05:58.089 killing process with pid 60157 00:05:58.089 13:48:47 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.089 13:48:47 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.089 13:48:47 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60157' 00:05:58.089 13:48:47 event.app_repeat -- common/autotest_common.sh@969 -- # kill 60157 00:05:58.089 13:48:47 event.app_repeat -- common/autotest_common.sh@974 -- # wait 60157 00:05:58.348 spdk_app_start is called in Round 0. 00:05:58.348 Shutdown signal received, stop current app iteration 00:05:58.348 Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 reinitialization... 00:05:58.348 spdk_app_start is called in Round 1. 00:05:58.348 Shutdown signal received, stop current app iteration 00:05:58.348 Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 reinitialization... 00:05:58.348 spdk_app_start is called in Round 2. 00:05:58.348 Shutdown signal received, stop current app iteration 00:05:58.348 Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 reinitialization... 00:05:58.348 spdk_app_start is called in Round 3. 00:05:58.348 Shutdown signal received, stop current app iteration 00:05:58.348 ************************************ 00:05:58.348 END TEST app_repeat 00:05:58.348 ************************************ 00:05:58.348 13:48:47 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:58.348 13:48:47 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:58.348 00:05:58.348 real 0m19.229s 00:05:58.348 user 0m43.000s 00:05:58.348 sys 0m2.978s 00:05:58.348 13:48:47 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.348 13:48:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.348 13:48:47 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:58.348 13:48:47 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:58.348 13:48:47 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.348 13:48:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.348 13:48:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.348 ************************************ 00:05:58.348 START TEST cpu_locks 00:05:58.348 ************************************ 00:05:58.348 13:48:47 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:58.348 * Looking for test storage... 
00:05:58.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:58.607 13:48:47 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:58.607 13:48:47 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:58.607 13:48:47 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:58.607 13:48:47 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:58.607 13:48:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.607 13:48:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.607 13:48:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.607 ************************************ 00:05:58.607 START TEST default_locks 00:05:58.607 ************************************ 00:05:58.607 13:48:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:58.607 13:48:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60595 00:05:58.607 13:48:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60595 00:05:58.607 13:48:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.607 13:48:47 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60595 ']' 00:05:58.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.607 13:48:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.607 13:48:47 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.607 13:48:47 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.607 13:48:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.607 13:48:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.607 [2024-07-25 13:48:47.460855] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:05:58.607 [2024-07-25 13:48:47.460960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60595 ] 00:05:58.607 [2024-07-25 13:48:47.601571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.866 [2024-07-25 13:48:47.734121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.866 [2024-07-25 13:48:47.791930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:59.804 13:48:48 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.804 13:48:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:59.804 13:48:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60595 00:05:59.804 13:48:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60595 00:05:59.804 13:48:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.063 13:48:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60595 00:06:00.063 13:48:48 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 60595 ']' 00:06:00.063 13:48:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 60595 00:06:00.063 13:48:48 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:00.063 13:48:48 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.063 13:48:48 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60595 00:06:00.063 killing process with pid 60595 00:06:00.063 13:48:48 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.063 13:48:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.063 13:48:48 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60595' 00:06:00.063 13:48:48 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 60595 00:06:00.063 13:48:48 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 60595 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60595 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60595 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60595 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60595 ']' 00:06:00.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
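The default_locks flow just traced checks that the running target holds a POSIX lock whose name contains spdk_cpu_lock (visible through lslocks) and then tears the process down. A simplified stand-in for those two helpers, assuming $spdk_tgt_pid is the pid captured by waitforlisten and that the target was launched by this same shell:

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock    # non-zero exit = no lock held
    }

    killprocess() {
        local pid=$1 name
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" || true                        # pid must be a child of this shell
    }

    locks_exist "$spdk_tgt_pid"
    killprocess "$spdk_tgt_pid"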
00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.631 ERROR: process (pid: 60595) is no longer running 00:06:00.631 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60595) - No such process 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:00.631 00:06:00.631 real 0m2.009s 00:06:00.631 user 0m2.192s 00:06:00.631 sys 0m0.613s 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.631 ************************************ 00:06:00.631 END TEST default_locks 00:06:00.631 ************************************ 00:06:00.631 13:48:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.631 13:48:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:00.631 13:48:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.631 13:48:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.631 13:48:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.631 ************************************ 00:06:00.631 START TEST default_locks_via_rpc 00:06:00.631 ************************************ 00:06:00.631 13:48:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:00.631 13:48:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60647 00:06:00.631 13:48:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60647 00:06:00.631 13:48:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.631 13:48:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60647 ']' 00:06:00.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
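The NOT wrapper traced above inverts a command's status so that an expected failure counts as a pass; here waitforlisten must not find the pid that was just killed. The real helper also treats exit codes above 128 (signals) specially; this sketch keeps only the core idea:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))          # succeed only if the wrapped command failed
    }

    # the target was killed above, so this has to fail for the test to pass
    NOT waitforlisten "$spdk_tgt_pid"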
00:06:00.631 13:48:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.631 13:48:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.631 13:48:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.631 13:48:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.631 13:48:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.631 [2024-07-25 13:48:49.527156] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:00.631 [2024-07-25 13:48:49.528133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60647 ] 00:06:00.891 [2024-07-25 13:48:49.665572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.891 [2024-07-25 13:48:49.786308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.891 [2024-07-25 13:48:49.841722] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:01.826 13:48:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.826 13:48:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:01.826 13:48:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:01.826 13:48:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.826 13:48:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.826 13:48:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.827 13:48:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:01.827 13:48:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:01.827 13:48:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:01.827 13:48:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:01.827 13:48:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.827 13:48:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.827 13:48:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.827 13:48:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.827 13:48:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60647 00:06:01.827 13:48:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60647 00:06:01.827 13:48:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.086 13:48:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60647 00:06:02.086 13:48:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 60647 ']' 00:06:02.086 13:48:51 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 60647 00:06:02.086 13:48:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:02.086 13:48:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.086 13:48:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60647 00:06:02.086 killing process with pid 60647 00:06:02.086 13:48:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.086 13:48:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.086 13:48:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60647' 00:06:02.086 13:48:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 60647 00:06:02.086 13:48:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 60647 00:06:02.654 ************************************ 00:06:02.654 END TEST default_locks_via_rpc 00:06:02.654 ************************************ 00:06:02.654 00:06:02.654 real 0m2.016s 00:06:02.654 user 0m2.204s 00:06:02.654 sys 0m0.589s 00:06:02.654 13:48:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.654 13:48:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.654 13:48:51 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:02.654 13:48:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.654 13:48:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.654 13:48:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.654 ************************************ 00:06:02.654 START TEST non_locking_app_on_locked_coremask 00:06:02.654 ************************************ 00:06:02.654 13:48:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:02.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.654 13:48:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60698 00:06:02.654 13:48:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60698 /var/tmp/spdk.sock 00:06:02.654 13:48:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.654 13:48:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60698 ']' 00:06:02.654 13:48:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.654 13:48:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.654 13:48:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
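default_locks_via_rpc, traced above, shows that the core lock can also be dropped and re-taken at runtime through RPCs rather than the --disable-cpumask-locks startup flag. A sketch of that round trip (rpc.py path shortened; the no-locks assertion after the disable call is elided because the helper checks lock files directly):

    rpc_py="scripts/rpc.py -s /var/tmp/spdk.sock"

    $rpc_py framework_disable_cpumask_locks              # release the core lock
    $rpc_py framework_enable_cpumask_locks               # take it again
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # and verify it is back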
00:06:02.654 13:48:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.654 13:48:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.654 [2024-07-25 13:48:51.591585] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:02.655 [2024-07-25 13:48:51.591890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60698 ] 00:06:02.915 [2024-07-25 13:48:51.731320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.915 [2024-07-25 13:48:51.851743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.915 [2024-07-25 13:48:51.907024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:03.852 13:48:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.852 13:48:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:03.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.852 13:48:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60714 00:06:03.852 13:48:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:03.852 13:48:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60714 /var/tmp/spdk2.sock 00:06:03.852 13:48:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60714 ']' 00:06:03.852 13:48:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.852 13:48:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.852 13:48:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.852 13:48:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.852 13:48:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.852 [2024-07-25 13:48:52.635804] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:03.852 [2024-07-25 13:48:52.636110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60714 ] 00:06:03.852 [2024-07-25 13:48:52.782871] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
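non_locking_app_on_locked_coremask, starting above, boils down to: a first target owns the core-0 lock, and a second instance on the same -m 0x1 mask still starts cleanly as long as it opts out of locking and talks on its own RPC socket. A sketch under those assumptions (binary path shortened; waitforlisten is the harness helper seen throughout this log):

    spdk_tgt=build/bin/spdk_tgt

    $spdk_tgt -m 0x1 &
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock

    $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock    # comes up despite the held lock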
00:06:03.852 [2024-07-25 13:48:52.782956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.111 [2024-07-25 13:48:53.020555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.111 [2024-07-25 13:48:53.140920] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:04.678 13:48:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.678 13:48:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:04.678 13:48:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60698 00:06:04.678 13:48:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60698 00:06:04.678 13:48:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.613 13:48:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60698 00:06:05.613 13:48:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60698 ']' 00:06:05.613 13:48:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60698 00:06:05.613 13:48:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:05.613 13:48:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.613 13:48:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60698 00:06:05.613 killing process with pid 60698 00:06:05.613 13:48:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.613 13:48:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.613 13:48:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60698' 00:06:05.613 13:48:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60698 00:06:05.613 13:48:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60698 00:06:06.549 13:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60714 00:06:06.549 13:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60714 ']' 00:06:06.549 13:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60714 00:06:06.549 13:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:06.549 13:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.549 13:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60714 00:06:06.549 13:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.549 killing process with pid 60714 00:06:06.549 13:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.549 13:48:55 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60714' 00:06:06.549 13:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60714 00:06:06.549 13:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60714 00:06:06.959 ************************************ 00:06:06.959 END TEST non_locking_app_on_locked_coremask 00:06:06.959 ************************************ 00:06:06.959 00:06:06.959 real 0m4.237s 00:06:06.959 user 0m4.699s 00:06:06.959 sys 0m1.176s 00:06:06.959 13:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.959 13:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.959 13:48:55 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:06.959 13:48:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.959 13:48:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.959 13:48:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.959 ************************************ 00:06:06.959 START TEST locking_app_on_unlocked_coremask 00:06:06.959 ************************************ 00:06:06.959 13:48:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:06.959 13:48:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60781 00:06:06.959 13:48:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60781 /var/tmp/spdk.sock 00:06:06.959 13:48:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:06.959 13:48:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60781 ']' 00:06:06.959 13:48:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.959 13:48:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.959 13:48:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.959 13:48:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.959 13:48:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.960 [2024-07-25 13:48:55.881227] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:06.960 [2024-07-25 13:48:55.881554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60781 ] 00:06:07.220 [2024-07-25 13:48:56.021119] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:07.220 [2024-07-25 13:48:56.021162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.220 [2024-07-25 13:48:56.138492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.220 [2024-07-25 13:48:56.198368] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:08.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.156 13:48:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.156 13:48:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:08.156 13:48:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60797 00:06:08.156 13:48:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:08.156 13:48:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60797 /var/tmp/spdk2.sock 00:06:08.156 13:48:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60797 ']' 00:06:08.156 13:48:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.156 13:48:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.156 13:48:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.156 13:48:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.156 13:48:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.156 [2024-07-25 13:48:56.897826] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
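locking_app_on_unlocked_coremask, in progress above, is the mirror image of the previous case: the first target (pid 60781) was started with --disable-cpumask-locks, so the second, unmodified target (pid 60797) is the one that actually acquires the core lock. Reusing the pid2 naming from the earlier sketch, the check the harness runs afterwards looks like:

    # only the second instance should show up as the lock holder
    lslocks -p "$pid2" | grep -q spdk_cpu_lock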
00:06:08.156 [2024-07-25 13:48:56.898199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60797 ] 00:06:08.156 [2024-07-25 13:48:57.042166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.415 [2024-07-25 13:48:57.285048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.415 [2024-07-25 13:48:57.400754] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:08.982 13:48:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.982 13:48:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:08.982 13:48:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60797 00:06:08.982 13:48:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60797 00:06:08.982 13:48:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.918 13:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60781 00:06:09.918 13:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60781 ']' 00:06:09.918 13:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60781 00:06:09.918 13:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:09.918 13:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.918 13:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60781 00:06:09.918 killing process with pid 60781 00:06:09.918 13:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.918 13:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.918 13:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60781' 00:06:09.918 13:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60781 00:06:09.918 13:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60781 00:06:10.485 13:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60797 00:06:10.485 13:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60797 ']' 00:06:10.485 13:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60797 00:06:10.742 13:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:10.742 13:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.742 13:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60797 00:06:10.742 killing process with pid 60797 00:06:10.742 13:48:59 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.742 13:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.742 13:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60797' 00:06:10.742 13:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60797 00:06:10.742 13:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60797 00:06:11.000 00:06:11.000 real 0m4.145s 00:06:11.000 user 0m4.555s 00:06:11.000 sys 0m1.123s 00:06:11.000 13:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.000 13:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.000 ************************************ 00:06:11.000 END TEST locking_app_on_unlocked_coremask 00:06:11.000 ************************************ 00:06:11.000 13:48:59 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:11.001 13:49:00 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.001 13:49:00 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.001 13:49:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.001 ************************************ 00:06:11.001 START TEST locking_app_on_locked_coremask 00:06:11.001 ************************************ 00:06:11.001 13:49:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:11.001 13:49:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60864 00:06:11.001 13:49:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.001 13:49:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60864 /var/tmp/spdk.sock 00:06:11.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.001 13:49:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60864 ']' 00:06:11.001 13:49:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.001 13:49:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.001 13:49:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.001 13:49:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.001 13:49:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.259 [2024-07-25 13:49:00.074630] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:11.259 [2024-07-25 13:49:00.074899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60864 ] 00:06:11.259 [2024-07-25 13:49:00.215153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.518 [2024-07-25 13:49:00.319974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.518 [2024-07-25 13:49:00.377017] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:12.085 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.085 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:12.085 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60880 00:06:12.085 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60880 /var/tmp/spdk2.sock 00:06:12.085 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:12.085 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:12.085 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60880 /var/tmp/spdk2.sock 00:06:12.085 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:12.085 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.085 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:12.086 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.086 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60880 /var/tmp/spdk2.sock 00:06:12.086 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60880 ']' 00:06:12.086 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.086 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.086 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.086 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.086 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.345 [2024-07-25 13:49:01.158548] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
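The launch just traced starts a second spdk_tgt on the same core mask (-m 0x1) already held by pid 60864; its EAL parameter dump and the resulting claim failure follow below. Outside the harness the same conflict can be reproduced with two plain invocations. A minimal sketch, assuming the build paths shown in the trace (not part of the test itself):

  cd /home/vagrant/spdk_repo/spdk
  ./build/bin/spdk_tgt -m 0x1 &                        # first instance locks core 0
  sleep 2                                              # give it time to come up
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # second instance exits with
                                                       # "Cannot create lock on core 0, ..."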
00:06:12.345 [2024-07-25 13:49:01.159026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60880 ] 00:06:12.345 [2024-07-25 13:49:01.303550] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60864 has claimed it. 00:06:12.345 [2024-07-25 13:49:01.303669] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:12.912 ERROR: process (pid: 60880) is no longer running 00:06:12.912 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60880) - No such process 00:06:12.912 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.912 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:12.912 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:12.912 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:12.912 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:12.912 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:12.912 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60864 00:06:12.912 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60864 00:06:12.912 13:49:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.479 13:49:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60864 00:06:13.479 13:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60864 ']' 00:06:13.479 13:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60864 00:06:13.479 13:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:13.479 13:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.479 13:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60864 00:06:13.479 killing process with pid 60864 00:06:13.479 13:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.479 13:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.479 13:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60864' 00:06:13.479 13:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60864 00:06:13.479 13:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60864 00:06:13.736 00:06:13.736 real 0m2.733s 00:06:13.736 user 0m3.169s 00:06:13.736 sys 0m0.684s 00:06:13.736 13:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.736 ************************************ 00:06:13.736 END 
TEST locking_app_on_locked_coremask 00:06:13.736 ************************************ 00:06:13.736 13:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.994 13:49:02 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:13.994 13:49:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.994 13:49:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.994 13:49:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.994 ************************************ 00:06:13.994 START TEST locking_overlapped_coremask 00:06:13.994 ************************************ 00:06:13.994 13:49:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:13.994 13:49:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60931 00:06:13.994 13:49:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60931 /var/tmp/spdk.sock 00:06:13.994 13:49:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:13.994 13:49:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60931 ']' 00:06:13.994 13:49:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.994 13:49:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.994 13:49:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.994 13:49:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.994 13:49:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.994 [2024-07-25 13:49:02.848546] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
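The target being started here runs with -m 0x7, i.e. cores 0, 1 and 2, so besides the "Reactor started on core N" notices below, the harness expects one lock file per claimed core. The same state can be inspected by hand; a sketch using the pid (60931) and the lock paths that appear later in this trace:

  ls /var/tmp/spdk_cpu_lock_*            # expect ..._000, ..._001 and ..._002
  lslocks -p 60931 | grep spdk_cpu_lock  # same check the locks_exist helper performs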
00:06:13.994 [2024-07-25 13:49:02.848636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60931 ] 00:06:13.994 [2024-07-25 13:49:02.982887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.252 [2024-07-25 13:49:03.097828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.252 [2024-07-25 13:49:03.097926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.252 [2024-07-25 13:49:03.097919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.252 [2024-07-25 13:49:03.154484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.820 13:49:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.820 13:49:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:14.820 13:49:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60949 00:06:14.820 13:49:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:14.820 13:49:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60949 /var/tmp/spdk2.sock 00:06:14.820 13:49:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:14.820 13:49:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60949 /var/tmp/spdk2.sock 00:06:14.820 13:49:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:14.820 13:49:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.820 13:49:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:14.820 13:49:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.820 13:49:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60949 /var/tmp/spdk2.sock 00:06:14.820 13:49:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60949 ']' 00:06:14.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.820 13:49:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.820 13:49:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.820 13:49:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.820 13:49:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.820 13:49:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.077 [2024-07-25 13:49:03.888519] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
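The second target below is started with -m 0x1c (cores 2, 3 and 4), which overlaps the first target's 0x7 (cores 0, 1 and 2) on exactly one core. The mask arithmetic alone, as a one-line sketch:

  printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2

which is why the failure below reads "Cannot create lock on core 2, probably process 60931 has claimed it."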
00:06:15.077 [2024-07-25 13:49:03.888599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60949 ] 00:06:15.077 [2024-07-25 13:49:04.038521] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60931 has claimed it. 00:06:15.077 [2024-07-25 13:49:04.038584] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:15.643 ERROR: process (pid: 60949) is no longer running 00:06:15.643 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60949) - No such process 00:06:15.643 13:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.643 13:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:15.643 13:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:15.643 13:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.643 13:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:15.643 13:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.643 13:49:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:15.643 13:49:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:15.643 13:49:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:15.643 13:49:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:15.643 13:49:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60931 00:06:15.643 13:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 60931 ']' 00:06:15.643 13:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 60931 00:06:15.643 13:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:15.643 13:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.643 13:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60931 00:06:15.643 13:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.643 13:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.643 13:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60931' 00:06:15.643 killing process with pid 60931 00:06:15.643 13:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 60931 00:06:15.643 13:49:04 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 60931 00:06:16.209 00:06:16.209 real 0m2.173s 00:06:16.209 user 0m5.959s 00:06:16.209 sys 0m0.458s 00:06:16.209 13:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.209 ************************************ 00:06:16.209 END TEST locking_overlapped_coremask 00:06:16.209 ************************************ 00:06:16.209 13:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.209 13:49:05 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:16.209 13:49:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.209 13:49:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.209 13:49:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.209 ************************************ 00:06:16.209 START TEST locking_overlapped_coremask_via_rpc 00:06:16.209 ************************************ 00:06:16.209 13:49:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:16.209 13:49:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60989 00:06:16.209 13:49:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60989 /var/tmp/spdk.sock 00:06:16.209 13:49:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:16.209 13:49:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60989 ']' 00:06:16.209 13:49:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.209 13:49:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.209 13:49:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.209 13:49:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.209 13:49:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.209 [2024-07-25 13:49:05.085544] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:16.209 [2024-07-25 13:49:05.085643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60989 ] 00:06:16.209 [2024-07-25 13:49:05.220607] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
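The "CPU core locks deactivated." notice above is the effect of --disable-cpumask-locks: the target comes up on cores 0, 1 and 2 without taking any /var/tmp/spdk_cpu_lock_* files, and the test claims them later over JSON-RPC. Roughly the equivalent manual sequence, as a sketch with the paths from the trace:

  cd /home/vagrant/spdk_repo/spdk
  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  ./scripts/rpc.py framework_enable_cpumask_locks   # now the per-core lock files appear
  ls /var/tmp/spdk_cpu_lock_*                       # ..._000, ..._001, ..._002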
00:06:16.209 [2024-07-25 13:49:05.220646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.468 [2024-07-25 13:49:05.317890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.468 [2024-07-25 13:49:05.318016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.468 [2024-07-25 13:49:05.318022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.468 [2024-07-25 13:49:05.376576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.035 13:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.035 13:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:17.035 13:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:17.035 13:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61007 00:06:17.035 13:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61007 /var/tmp/spdk2.sock 00:06:17.035 13:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61007 ']' 00:06:17.035 13:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.035 13:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.035 13:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.035 13:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.035 13:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.293 [2024-07-25 13:49:06.070298] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:17.293 [2024-07-25 13:49:06.070390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61007 ] 00:06:17.293 [2024-07-25 13:49:06.211667] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:17.293 [2024-07-25 13:49:06.211725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.552 [2024-07-25 13:49:06.430067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.552 [2024-07-25 13:49:06.433463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:17.552 [2024-07-25 13:49:06.433463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.552 [2024-07-25 13:49:06.546309] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.123 [2024-07-25 13:49:07.093419] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60989 has claimed it. 
00:06:18.123 request: 00:06:18.123 { 00:06:18.123 "method": "framework_enable_cpumask_locks", 00:06:18.123 "req_id": 1 00:06:18.123 } 00:06:18.123 Got JSON-RPC error response 00:06:18.123 response: 00:06:18.123 { 00:06:18.123 "code": -32603, 00:06:18.123 "message": "Failed to claim CPU core: 2" 00:06:18.123 } 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60989 /var/tmp/spdk.sock 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60989 ']' 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.123 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.415 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.415 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:18.415 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61007 /var/tmp/spdk2.sock 00:06:18.415 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61007 ']' 00:06:18.415 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.415 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.415 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
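The request/response pair above is the raw JSON-RPC exchange hidden behind rpc_cmd. Issued by hand against the two sockets from this trace it comes down to (a sketch, same paths as above):

  ./scripts/rpc.py framework_enable_cpumask_locks
  #   first target (/var/tmp/spdk.sock, -m 0x7): succeeds and claims cores 0-2
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  #   second target (-m 0x1c): fails on the shared core 2 with the
  #   -32603 "Failed to claim CPU core: 2" error shown above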
00:06:18.415 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.415 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.673 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.673 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:18.673 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:18.673 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:18.673 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:18.673 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:18.673 ************************************ 00:06:18.673 END TEST locking_overlapped_coremask_via_rpc 00:06:18.673 ************************************ 00:06:18.673 00:06:18.673 real 0m2.626s 00:06:18.673 user 0m1.344s 00:06:18.673 sys 0m0.205s 00:06:18.673 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.673 13:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.673 13:49:07 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:18.673 13:49:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60989 ]] 00:06:18.673 13:49:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60989 00:06:18.673 13:49:07 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60989 ']' 00:06:18.673 13:49:07 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60989 00:06:18.673 13:49:07 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:18.673 13:49:07 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.673 13:49:07 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60989 00:06:18.673 killing process with pid 60989 00:06:18.673 13:49:07 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.673 13:49:07 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.673 13:49:07 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60989' 00:06:18.673 13:49:07 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60989 00:06:18.673 13:49:07 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60989 00:06:19.239 13:49:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61007 ]] 00:06:19.239 13:49:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61007 00:06:19.239 13:49:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61007 ']' 00:06:19.239 13:49:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61007 00:06:19.239 13:49:08 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:19.239 13:49:08 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.239 
13:49:08 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61007 00:06:19.239 killing process with pid 61007 00:06:19.239 13:49:08 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:19.239 13:49:08 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:19.239 13:49:08 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61007' 00:06:19.239 13:49:08 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61007 00:06:19.239 13:49:08 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61007 00:06:19.806 13:49:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:19.806 13:49:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:19.806 13:49:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60989 ]] 00:06:19.806 13:49:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60989 00:06:19.806 13:49:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60989 ']' 00:06:19.806 13:49:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60989 00:06:19.806 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60989) - No such process 00:06:19.806 Process with pid 60989 is not found 00:06:19.806 13:49:08 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60989 is not found' 00:06:19.806 13:49:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61007 ]] 00:06:19.806 13:49:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61007 00:06:19.806 13:49:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61007 ']' 00:06:19.806 13:49:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61007 00:06:19.806 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61007) - No such process 00:06:19.806 Process with pid 61007 is not found 00:06:19.806 13:49:08 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61007 is not found' 00:06:19.806 13:49:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:19.806 00:06:19.806 real 0m21.248s 00:06:19.806 user 0m36.508s 00:06:19.806 sys 0m5.734s 00:06:19.806 13:49:08 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.806 13:49:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.806 ************************************ 00:06:19.806 END TEST cpu_locks 00:06:19.806 ************************************ 00:06:19.806 00:06:19.806 real 0m47.994s 00:06:19.806 user 1m31.510s 00:06:19.806 sys 0m9.484s 00:06:19.806 13:49:08 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.806 13:49:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.806 ************************************ 00:06:19.806 END TEST event 00:06:19.806 ************************************ 00:06:19.806 13:49:08 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:19.806 13:49:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.806 13:49:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.806 13:49:08 -- common/autotest_common.sh@10 -- # set +x 00:06:19.806 ************************************ 00:06:19.806 START TEST thread 00:06:19.806 ************************************ 00:06:19.806 13:49:08 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:19.806 * Looking for test storage... 
00:06:19.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:19.806 13:49:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:19.806 13:49:08 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:19.806 13:49:08 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.806 13:49:08 thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.806 ************************************ 00:06:19.806 START TEST thread_poller_perf 00:06:19.806 ************************************ 00:06:19.806 13:49:08 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:19.806 [2024-07-25 13:49:08.725867] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:19.806 [2024-07-25 13:49:08.725980] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61130 ] 00:06:20.065 [2024-07-25 13:49:08.861770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.065 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:20.065 [2024-07-25 13:49:08.949161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.440 ====================================== 00:06:21.440 busy:2211077656 (cyc) 00:06:21.440 total_run_count: 351000 00:06:21.440 tsc_hz: 2200000000 (cyc) 00:06:21.440 ====================================== 00:06:21.440 poller_cost: 6299 (cyc), 2863 (nsec) 00:06:21.440 00:06:21.440 real 0m1.361s 00:06:21.440 user 0m1.196s 00:06:21.440 sys 0m0.060s 00:06:21.440 13:49:10 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.440 13:49:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.440 ************************************ 00:06:21.440 END TEST thread_poller_perf 00:06:21.440 ************************************ 00:06:21.440 13:49:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:21.440 13:49:10 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:21.440 13:49:10 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.440 13:49:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.440 ************************************ 00:06:21.440 START TEST thread_poller_perf 00:06:21.440 ************************************ 00:06:21.440 13:49:10 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:21.440 [2024-07-25 13:49:10.135069] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:21.440 [2024-07-25 13:49:10.135190] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61165 ] 00:06:21.440 [2024-07-25 13:49:10.268591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.440 Running 1000 pollers for 1 seconds with 0 microseconds period. 
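The poller_perf summaries above (1 microsecond period) and below (0 microsecond period) read the same way: poller_cost is the busy TSC cycle count divided by total_run_count, converted to nanoseconds via the reported tsc_hz, as both runs' figures confirm. Reproducing the first run's numbers, arithmetic only:

  awk 'BEGIN { busy=2211077656; runs=351000; hz=2200000000;
               cyc=int(busy/runs);
               printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, int(cyc*1e9/hz) }'
  # -> poller_cost: 6299 (cyc), 2863 (nsec)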
00:06:21.440 [2024-07-25 13:49:10.362019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.816 ====================================== 00:06:22.816 busy:2202401314 (cyc) 00:06:22.816 total_run_count: 4606000 00:06:22.816 tsc_hz: 2200000000 (cyc) 00:06:22.816 ====================================== 00:06:22.816 poller_cost: 478 (cyc), 217 (nsec) 00:06:22.816 00:06:22.816 real 0m1.362s 00:06:22.816 user 0m1.203s 00:06:22.816 sys 0m0.052s 00:06:22.816 13:49:11 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.816 13:49:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:22.816 ************************************ 00:06:22.816 END TEST thread_poller_perf 00:06:22.816 ************************************ 00:06:22.816 13:49:11 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:22.816 00:06:22.816 real 0m2.898s 00:06:22.816 user 0m2.454s 00:06:22.816 sys 0m0.224s 00:06:22.816 13:49:11 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.816 ************************************ 00:06:22.816 END TEST thread 00:06:22.816 ************************************ 00:06:22.816 13:49:11 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.816 13:49:11 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:22.816 13:49:11 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:22.816 13:49:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.816 13:49:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.816 13:49:11 -- common/autotest_common.sh@10 -- # set +x 00:06:22.816 ************************************ 00:06:22.816 START TEST app_cmdline 00:06:22.816 ************************************ 00:06:22.816 13:49:11 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:22.816 * Looking for test storage... 00:06:22.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:22.816 13:49:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:22.816 13:49:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61234 00:06:22.816 13:49:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61234 00:06:22.816 13:49:11 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 61234 ']' 00:06:22.816 13:49:11 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.816 13:49:11 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:22.816 13:49:11 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.816 13:49:11 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.816 13:49:11 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.816 13:49:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.816 [2024-07-25 13:49:11.714951] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
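The app_cmdline run that starts here boots spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served and everything else is rejected, which is exactly what the traced calls below verify. In isolation that looks like (a sketch with the same paths as the trace):

  cd /home/vagrant/spdk_repo/spdk
  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py spdk_get_version         # allowed: returns the version JSON shown below
  ./scripts/rpc.py env_dpdk_get_mem_stats   # not allowed: -32601 "Method not found"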
00:06:22.816 [2024-07-25 13:49:11.715065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61234 ] 00:06:23.074 [2024-07-25 13:49:11.855189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.074 [2024-07-25 13:49:11.977989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.074 [2024-07-25 13:49:12.037987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.641 13:49:12 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.641 13:49:12 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:23.641 13:49:12 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:24.207 { 00:06:24.207 "version": "SPDK v24.09-pre git sha1 50fa6ca31", 00:06:24.207 "fields": { 00:06:24.207 "major": 24, 00:06:24.207 "minor": 9, 00:06:24.207 "patch": 0, 00:06:24.207 "suffix": "-pre", 00:06:24.207 "commit": "50fa6ca31" 00:06:24.207 } 00:06:24.207 } 00:06:24.207 13:49:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:24.207 13:49:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:24.207 13:49:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:24.207 13:49:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:24.207 13:49:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:24.207 13:49:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:24.207 13:49:12 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.207 13:49:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.207 13:49:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:24.207 13:49:12 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.207 13:49:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:24.207 13:49:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:24.207 13:49:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:24.207 13:49:13 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:24.207 13:49:13 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:24.207 13:49:13 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:24.207 13:49:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.207 13:49:13 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:24.207 13:49:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.207 13:49:13 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:24.207 13:49:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.207 13:49:13 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:24.207 13:49:13 app_cmdline -- common/autotest_common.sh@644 -- # [[ 
-x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:24.207 13:49:13 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:24.207 request: 00:06:24.207 { 00:06:24.207 "method": "env_dpdk_get_mem_stats", 00:06:24.207 "req_id": 1 00:06:24.207 } 00:06:24.207 Got JSON-RPC error response 00:06:24.207 response: 00:06:24.207 { 00:06:24.207 "code": -32601, 00:06:24.207 "message": "Method not found" 00:06:24.207 } 00:06:24.474 13:49:13 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:24.474 13:49:13 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:24.474 13:49:13 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:24.474 13:49:13 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:24.474 13:49:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61234 00:06:24.474 13:49:13 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 61234 ']' 00:06:24.474 13:49:13 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 61234 00:06:24.474 13:49:13 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:24.474 13:49:13 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.474 13:49:13 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61234 00:06:24.474 13:49:13 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.474 13:49:13 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.474 killing process with pid 61234 00:06:24.474 13:49:13 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61234' 00:06:24.474 13:49:13 app_cmdline -- common/autotest_common.sh@969 -- # kill 61234 00:06:24.474 13:49:13 app_cmdline -- common/autotest_common.sh@974 -- # wait 61234 00:06:24.737 00:06:24.737 real 0m2.117s 00:06:24.737 user 0m2.619s 00:06:24.737 sys 0m0.491s 00:06:24.737 13:49:13 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.737 13:49:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.737 ************************************ 00:06:24.737 END TEST app_cmdline 00:06:24.737 ************************************ 00:06:24.737 13:49:13 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:24.737 13:49:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.737 13:49:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.737 13:49:13 -- common/autotest_common.sh@10 -- # set +x 00:06:24.737 ************************************ 00:06:24.737 START TEST version 00:06:24.737 ************************************ 00:06:24.737 13:49:13 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:24.995 * Looking for test storage... 
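The version test traced next rebuilds the version string from include/spdk/version.h with nothing more than grep, cut and tr, then compares it against Python's spdk.__version__ (the trace below turns 24.9 plus the -pre suffix into 24.9rc0 before comparing). The extraction step pulled out into a stand-alone sketch, assuming, as the traced cut -f2 implies, tab-separated #define lines:

  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  echo "${major}.${minor}${suffix}"   # -> 24.9-pre here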
00:06:24.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:24.995 13:49:13 version -- app/version.sh@17 -- # get_header_version major 00:06:24.995 13:49:13 version -- app/version.sh@14 -- # cut -f2 00:06:24.995 13:49:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.995 13:49:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:24.995 13:49:13 version -- app/version.sh@17 -- # major=24 00:06:24.995 13:49:13 version -- app/version.sh@18 -- # get_header_version minor 00:06:24.995 13:49:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:24.995 13:49:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.995 13:49:13 version -- app/version.sh@14 -- # cut -f2 00:06:24.995 13:49:13 version -- app/version.sh@18 -- # minor=9 00:06:24.995 13:49:13 version -- app/version.sh@19 -- # get_header_version patch 00:06:24.995 13:49:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:24.995 13:49:13 version -- app/version.sh@14 -- # cut -f2 00:06:24.995 13:49:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.995 13:49:13 version -- app/version.sh@19 -- # patch=0 00:06:24.995 13:49:13 version -- app/version.sh@20 -- # get_header_version suffix 00:06:24.995 13:49:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:24.995 13:49:13 version -- app/version.sh@14 -- # cut -f2 00:06:24.995 13:49:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.995 13:49:13 version -- app/version.sh@20 -- # suffix=-pre 00:06:24.995 13:49:13 version -- app/version.sh@22 -- # version=24.9 00:06:24.995 13:49:13 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:24.995 13:49:13 version -- app/version.sh@28 -- # version=24.9rc0 00:06:24.996 13:49:13 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:24.996 13:49:13 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:24.996 13:49:13 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:24.996 13:49:13 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:24.996 00:06:24.996 real 0m0.143s 00:06:24.996 user 0m0.087s 00:06:24.996 sys 0m0.083s 00:06:24.996 13:49:13 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.996 13:49:13 version -- common/autotest_common.sh@10 -- # set +x 00:06:24.996 ************************************ 00:06:24.996 END TEST version 00:06:24.996 ************************************ 00:06:24.996 13:49:13 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:24.996 13:49:13 -- spdk/autotest.sh@202 -- # uname -s 00:06:24.996 13:49:13 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:06:24.996 13:49:13 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:24.996 13:49:13 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]] 00:06:24.996 13:49:13 -- spdk/autotest.sh@209 -- # [[ 0 -eq 0 ]] 00:06:24.996 13:49:13 -- spdk/autotest.sh@210 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:24.996 13:49:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.996 13:49:13 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.996 13:49:13 -- common/autotest_common.sh@10 -- # set +x 00:06:24.996 ************************************ 00:06:24.996 START TEST spdk_dd 00:06:24.996 ************************************ 00:06:24.996 13:49:13 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:24.996 * Looking for test storage... 00:06:24.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:24.996 13:49:14 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:24.996 13:49:14 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.996 13:49:14 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.996 13:49:14 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.996 13:49:14 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.996 13:49:14 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.996 13:49:14 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.996 13:49:14 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:24.996 13:49:14 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.996 13:49:14 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:25.565 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:25.565 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:25.565 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:25.565 13:49:14 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:25.565 13:49:14 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:06:25.565 13:49:14 spdk_dd -- 
scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@230 -- # local class 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@232 -- # local progif 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@233 -- # class=01 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:06:25.565 13:49:14 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 
0000:00:10.0 0000:00:11.0 00:06:25.565 13:49:14 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* 
]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:25.565 13:49:14 spdk_dd -- dd/common.sh@142 
-- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 
== liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == 
liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:25.566 * spdk_dd linked to liburing 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:25.566 13:49:14 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:25.566 13:49:14 spdk_dd -- 
common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:25.566 13:49:14 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 
00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:25.567 13:49:14 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:06:25.567 13:49:14 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:25.567 13:49:14 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:25.567 13:49:14 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:25.567 13:49:14 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:25.567 13:49:14 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:25.567 13:49:14 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:25.567 13:49:14 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:25.567 13:49:14 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.567 13:49:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:25.567 ************************************ 00:06:25.567 START TEST spdk_dd_basic_rw 00:06:25.567 ************************************ 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:25.567 * Looking for test storage... 00:06:25.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:25.567 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:25.829 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted 
Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not 
Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:25.829 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:25.829 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete 
Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): 
Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b 
Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:25.829 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:25.829 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:25.829 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:25.829 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:25.829 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:25.829 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:25.829 13:49:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:25.829 13:49:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:25.829 13:49:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.829 13:49:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:25.829 13:49:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:25.829 ************************************ 00:06:25.829 START TEST dd_bs_lt_native_bs 00:06:25.829 ************************************ 00:06:25.829 13:49:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:25.829 13:49:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:06:25.829 13:49:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:25.829 13:49:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.829 13:49:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.830 13:49:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.830 13:49:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.830 13:49:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.830 13:49:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.830 13:49:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.830 13:49:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:25.830 13:49:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:25.830 [2024-07-25 13:49:14.855319] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
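What the trace above shows: get_native_nvme_bs pulls the drive's native block size out of the spdk_nvme_identify dump in two regex steps, first the index of the currently selected LBA format (#04 here), then that format's data size (4096). A minimal sketch of the same two-step parse, assuming the identify text has already been captured into a variable (function and variable names here are illustrative, not the ones used by dd/common.sh):

    # parse_native_bs: derive the native data size from spdk_nvme_identify text.
    # usage: parse_native_bs "$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0')"
    parse_native_bs() {
        local id=$1 lbaf
        # step 1: which LBA format is currently selected, e.g. "Current LBA Format: LBA Format #04"
        local re_current='Current LBA Format: *LBA Format #([0-9]+)'
        [[ $id =~ $re_current ]] || return 1
        lbaf=${BASH_REMATCH[1]}
        # step 2: what data size that format uses, e.g. "LBA Format #04: Data Size: 4096"
        local re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
        [[ $id =~ $re_size ]] || return 1
        echo "${BASH_REMATCH[1]}"
    }

Against the identify output above this yields 4096, which is why dd_bs_lt_native_bs then launches spdk_dd with --bs=2048 under the NOT wrapper: a block size below the native 4096 is expected to be rejected, and the test only passes because that spdk_dd run fails.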
00:06:25.830 [2024-07-25 13:49:14.855430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61562 ] 00:06:26.098 { 00:06:26.098 "subsystems": [ 00:06:26.098 { 00:06:26.098 "subsystem": "bdev", 00:06:26.098 "config": [ 00:06:26.098 { 00:06:26.098 "params": { 00:06:26.098 "trtype": "pcie", 00:06:26.098 "traddr": "0000:00:10.0", 00:06:26.098 "name": "Nvme0" 00:06:26.098 }, 00:06:26.098 "method": "bdev_nvme_attach_controller" 00:06:26.098 }, 00:06:26.098 { 00:06:26.098 "method": "bdev_wait_for_examine" 00:06:26.098 } 00:06:26.098 ] 00:06:26.098 } 00:06:26.098 ] 00:06:26.098 } 00:06:26.098 [2024-07-25 13:49:14.995664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.098 [2024-07-25 13:49:15.099401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.369 [2024-07-25 13:49:15.159901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.369 [2024-07-25 13:49:15.270641] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:26.369 [2024-07-25 13:49:15.270720] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.628 [2024-07-25 13:49:15.403614] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:26.628 00:06:26.628 real 0m0.702s 00:06:26.628 user 0m0.512s 00:06:26.628 sys 0m0.166s 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:26.628 ************************************ 00:06:26.628 END TEST dd_bs_lt_native_bs 00:06:26.628 ************************************ 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:26.628 ************************************ 00:06:26.628 START TEST dd_rw 00:06:26.628 ************************************ 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 
-- # local qds bss 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:26.628 13:49:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:27.563 13:49:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:27.563 13:49:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:27.563 13:49:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:27.563 13:49:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:27.563 { 00:06:27.563 "subsystems": [ 00:06:27.563 { 00:06:27.563 "subsystem": "bdev", 00:06:27.563 "config": [ 00:06:27.563 { 00:06:27.563 "params": { 00:06:27.563 "trtype": "pcie", 00:06:27.563 "traddr": "0000:00:10.0", 00:06:27.563 "name": "Nvme0" 00:06:27.563 }, 00:06:27.563 "method": "bdev_nvme_attach_controller" 00:06:27.563 }, 00:06:27.563 { 00:06:27.564 "method": "bdev_wait_for_examine" 00:06:27.564 } 00:06:27.564 ] 00:06:27.564 } 00:06:27.564 ] 00:06:27.564 } 00:06:27.564 [2024-07-25 13:49:16.316419] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
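The dd_rw setup traced above builds its test matrix by shifting the native block size: bss ends up as 4096, 8192 and 16384 (native_bs << 0..2), each block size is driven at queue depths 1 and 64, and each pass moves 15 blocks (61440 bytes at the native size, which matches the "Copying: 60/60 [kB]" lines). Roughly, the loop structure being executed looks like the sketch below; run_rw_pass is a hypothetical stand-in for the write/read-back cycle that follows in the trace:

    native_bs=4096                       # value recovered from the identify dump above
    qds=(1 64)                           # queue depths exercised for every block size
    bss=()
    for s in 0 1 2; do
        bss+=($((native_bs << s)))       # 4096, 8192, 16384
    done
    count=15                             # blocks per pass -> 15 * 4096 = 61440 bytes at native bs

    run_rw_pass() {                      # hypothetical placeholder for one write/read-back/diff cycle
        echo "pass: bs=$1 qd=$2 count=$3"
    }
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            run_rw_pass "$bs" "$qd" "$count"
        done
    done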
00:06:27.564 [2024-07-25 13:49:16.316522] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61593 ] 00:06:27.564 [2024-07-25 13:49:16.455877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.564 [2024-07-25 13:49:16.563071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.822 [2024-07-25 13:49:16.618417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.079  Copying: 60/60 [kB] (average 19 MBps) 00:06:28.079 00:06:28.079 13:49:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:28.079 13:49:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:28.079 13:49:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:28.079 13:49:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:28.080 [2024-07-25 13:49:16.994323] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:28.080 [2024-07-25 13:49:16.994438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61612 ] 00:06:28.080 { 00:06:28.080 "subsystems": [ 00:06:28.080 { 00:06:28.080 "subsystem": "bdev", 00:06:28.080 "config": [ 00:06:28.080 { 00:06:28.080 "params": { 00:06:28.080 "trtype": "pcie", 00:06:28.080 "traddr": "0000:00:10.0", 00:06:28.080 "name": "Nvme0" 00:06:28.080 }, 00:06:28.080 "method": "bdev_nvme_attach_controller" 00:06:28.080 }, 00:06:28.080 { 00:06:28.080 "method": "bdev_wait_for_examine" 00:06:28.080 } 00:06:28.080 ] 00:06:28.080 } 00:06:28.080 ] 00:06:28.080 } 00:06:28.338 [2024-07-25 13:49:17.130158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.338 [2024-07-25 13:49:17.240991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.338 [2024-07-25 13:49:17.293025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.596  Copying: 60/60 [kB] (average 19 MBps) 00:06:28.596 00:06:28.596 13:49:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:28.596 13:49:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:28.596 13:49:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:28.596 13:49:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:28.596 13:49:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:28.596 13:49:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:28.596 13:49:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:28.596 13:49:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:28.596 13:49:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # gen_conf 00:06:28.596 13:49:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:28.596 13:49:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:28.853 [2024-07-25 13:49:17.674253] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:28.853 [2024-07-25 13:49:17.674385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61627 ] 00:06:28.853 { 00:06:28.853 "subsystems": [ 00:06:28.853 { 00:06:28.853 "subsystem": "bdev", 00:06:28.853 "config": [ 00:06:28.853 { 00:06:28.853 "params": { 00:06:28.853 "trtype": "pcie", 00:06:28.853 "traddr": "0000:00:10.0", 00:06:28.853 "name": "Nvme0" 00:06:28.853 }, 00:06:28.853 "method": "bdev_nvme_attach_controller" 00:06:28.853 }, 00:06:28.854 { 00:06:28.854 "method": "bdev_wait_for_examine" 00:06:28.854 } 00:06:28.854 ] 00:06:28.854 } 00:06:28.854 ] 00:06:28.854 } 00:06:28.854 [2024-07-25 13:49:17.816373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.112 [2024-07-25 13:49:17.941049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.112 [2024-07-25 13:49:17.996504] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.370  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:29.370 00:06:29.370 13:49:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:29.370 13:49:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:29.370 13:49:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:29.370 13:49:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:29.370 13:49:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:29.370 13:49:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:29.370 13:49:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.304 13:49:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:30.304 13:49:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:30.304 13:49:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:30.304 13:49:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.304 [2024-07-25 13:49:19.014895] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
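
The clear_nvme Nvme0n1 '' 61440 call traced just above expands, per the dd/common.sh lines, into a single 1 MiB write of zeroes over the start of the bdev; the "Copying: 1024/1024 [kB]" lines belong to that step, not to the transfers under test. A sketch of the helper, reusing $SPDK_DD and gen_conf from the surrounding sketches; the count derivation is an assumption, since the trace only ever shows bs=1048576 and count=1 for these sizes:

    clear_nvme() {
        local bdev=$1
        local nvme_ref=$2
        local size=${3:-1048576}
        local bs=1048576
        local count=$(( (size + bs - 1) / bs ))   # assumed: round the size up to whole 1 MiB blocks
        (( count >= 1 )) || count=1
        "$SPDK_DD" --if=/dev/zero --bs="$bs" --ob="$bdev" --count="$count" --json <(gen_conf)
    }
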
00:06:30.304 [2024-07-25 13:49:19.014987] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61652 ] 00:06:30.304 { 00:06:30.304 "subsystems": [ 00:06:30.304 { 00:06:30.304 "subsystem": "bdev", 00:06:30.304 "config": [ 00:06:30.304 { 00:06:30.304 "params": { 00:06:30.304 "trtype": "pcie", 00:06:30.304 "traddr": "0000:00:10.0", 00:06:30.304 "name": "Nvme0" 00:06:30.304 }, 00:06:30.304 "method": "bdev_nvme_attach_controller" 00:06:30.304 }, 00:06:30.304 { 00:06:30.305 "method": "bdev_wait_for_examine" 00:06:30.305 } 00:06:30.305 ] 00:06:30.305 } 00:06:30.305 ] 00:06:30.305 } 00:06:30.305 [2024-07-25 13:49:19.147914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.305 [2024-07-25 13:49:19.263131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.305 [2024-07-25 13:49:19.321864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:30.821  Copying: 60/60 [kB] (average 58 MBps) 00:06:30.821 00:06:30.821 13:49:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:30.821 13:49:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:30.821 13:49:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:30.821 13:49:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.821 [2024-07-25 13:49:19.702568] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:30.821 [2024-07-25 13:49:19.702688] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61665 ] 00:06:30.821 { 00:06:30.821 "subsystems": [ 00:06:30.821 { 00:06:30.821 "subsystem": "bdev", 00:06:30.821 "config": [ 00:06:30.821 { 00:06:30.821 "params": { 00:06:30.821 "trtype": "pcie", 00:06:30.821 "traddr": "0000:00:10.0", 00:06:30.821 "name": "Nvme0" 00:06:30.821 }, 00:06:30.821 "method": "bdev_nvme_attach_controller" 00:06:30.821 }, 00:06:30.821 { 00:06:30.821 "method": "bdev_wait_for_examine" 00:06:30.821 } 00:06:30.821 ] 00:06:30.821 } 00:06:30.821 ] 00:06:30.821 } 00:06:30.821 [2024-07-25 13:49:19.832757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.080 [2024-07-25 13:49:19.929690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.080 [2024-07-25 13:49:19.989210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.339  Copying: 60/60 [kB] (average 29 MBps) 00:06:31.339 00:06:31.339 13:49:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:31.339 13:49:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:31.339 13:49:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:31.339 13:49:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:31.339 13:49:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:31.339 13:49:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:31.339 13:49:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:31.339 13:49:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:31.339 13:49:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:31.339 13:49:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:31.339 13:49:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:31.339 [2024-07-25 13:49:20.360653] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:31.339 [2024-07-25 13:49:20.360746] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61681 ] 00:06:31.339 { 00:06:31.339 "subsystems": [ 00:06:31.339 { 00:06:31.339 "subsystem": "bdev", 00:06:31.339 "config": [ 00:06:31.339 { 00:06:31.339 "params": { 00:06:31.339 "trtype": "pcie", 00:06:31.339 "traddr": "0000:00:10.0", 00:06:31.339 "name": "Nvme0" 00:06:31.339 }, 00:06:31.339 "method": "bdev_nvme_attach_controller" 00:06:31.339 }, 00:06:31.339 { 00:06:31.339 "method": "bdev_wait_for_examine" 00:06:31.339 } 00:06:31.339 ] 00:06:31.339 } 00:06:31.339 ] 00:06:31.339 } 00:06:31.598 [2024-07-25 13:49:20.494050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.598 [2024-07-25 13:49:20.606866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.855 [2024-07-25 13:49:20.661698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.114  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:32.114 00:06:32.114 13:49:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:32.114 13:49:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:32.114 13:49:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:32.114 13:49:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:32.114 13:49:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:32.114 13:49:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:32.114 13:49:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:32.114 13:49:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:32.682 13:49:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:32.682 13:49:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:32.682 13:49:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:32.682 13:49:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:32.682 [2024-07-25 13:49:21.625655] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
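
Every spdk_dd invocation in this test is handed --json /dev/fd/62, and the brace-delimited fragments interleaved with the timestamps are that configuration being echoed back. Reassembled, it attaches the NVMe controller at PCI address 0000:00:10.0 under the name Nvme0 (its first namespace is then addressed as the bdev Nvme0n1) and waits for bdev examination to finish before the copy starts. A gen_conf-style wrapper producing the same document (the function wrapper is an assumption; the JSON body is copied from the trace):

    gen_conf() {
        cat <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "trtype": "pcie",
                "traddr": "0000:00:10.0",
                "name": "Nvme0"
              },
              "method": "bdev_nvme_attach_controller"
            },
            {
              "method": "bdev_wait_for_examine"
            }
          ]
        }
      ]
    }
    JSON
    }
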
00:06:32.682 [2024-07-25 13:49:21.625760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61704 ] 00:06:32.682 { 00:06:32.682 "subsystems": [ 00:06:32.682 { 00:06:32.682 "subsystem": "bdev", 00:06:32.682 "config": [ 00:06:32.682 { 00:06:32.682 "params": { 00:06:32.682 "trtype": "pcie", 00:06:32.682 "traddr": "0000:00:10.0", 00:06:32.682 "name": "Nvme0" 00:06:32.682 }, 00:06:32.682 "method": "bdev_nvme_attach_controller" 00:06:32.682 }, 00:06:32.682 { 00:06:32.682 "method": "bdev_wait_for_examine" 00:06:32.682 } 00:06:32.682 ] 00:06:32.682 } 00:06:32.682 ] 00:06:32.682 } 00:06:32.940 [2024-07-25 13:49:21.759138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.940 [2024-07-25 13:49:21.870276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.940 [2024-07-25 13:49:21.924215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.457  Copying: 56/56 [kB] (average 54 MBps) 00:06:33.457 00:06:33.457 13:49:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:33.457 13:49:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:33.457 13:49:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:33.457 13:49:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:33.457 [2024-07-25 13:49:22.301805] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:33.457 [2024-07-25 13:49:22.301923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61719 ] 00:06:33.457 { 00:06:33.457 "subsystems": [ 00:06:33.457 { 00:06:33.457 "subsystem": "bdev", 00:06:33.457 "config": [ 00:06:33.457 { 00:06:33.457 "params": { 00:06:33.457 "trtype": "pcie", 00:06:33.457 "traddr": "0000:00:10.0", 00:06:33.457 "name": "Nvme0" 00:06:33.457 }, 00:06:33.457 "method": "bdev_nvme_attach_controller" 00:06:33.457 }, 00:06:33.457 { 00:06:33.457 "method": "bdev_wait_for_examine" 00:06:33.457 } 00:06:33.457 ] 00:06:33.457 } 00:06:33.457 ] 00:06:33.457 } 00:06:33.457 [2024-07-25 13:49:22.441456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.715 [2024-07-25 13:49:22.529029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.715 [2024-07-25 13:49:22.586686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.974  Copying: 56/56 [kB] (average 27 MBps) 00:06:33.974 00:06:33.974 13:49:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:33.974 13:49:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:33.974 13:49:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:33.974 13:49:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:33.974 13:49:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:33.974 13:49:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:33.974 13:49:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:33.974 13:49:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:33.974 13:49:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:33.974 13:49:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:33.974 13:49:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:34.232 [2024-07-25 13:49:23.016112] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:34.232 [2024-07-25 13:49:23.016247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61734 ] 00:06:34.232 { 00:06:34.232 "subsystems": [ 00:06:34.232 { 00:06:34.232 "subsystem": "bdev", 00:06:34.232 "config": [ 00:06:34.232 { 00:06:34.232 "params": { 00:06:34.232 "trtype": "pcie", 00:06:34.232 "traddr": "0000:00:10.0", 00:06:34.232 "name": "Nvme0" 00:06:34.232 }, 00:06:34.232 "method": "bdev_nvme_attach_controller" 00:06:34.232 }, 00:06:34.232 { 00:06:34.232 "method": "bdev_wait_for_examine" 00:06:34.232 } 00:06:34.232 ] 00:06:34.232 } 00:06:34.232 ] 00:06:34.232 } 00:06:34.232 [2024-07-25 13:49:23.156443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.232 [2024-07-25 13:49:23.236217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.490 [2024-07-25 13:49:23.289454] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.748  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:34.748 00:06:34.748 13:49:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:34.748 13:49:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:34.748 13:49:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:34.748 13:49:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:34.748 13:49:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:34.748 13:49:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:34.748 13:49:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:35.313 13:49:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:35.313 13:49:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:35.313 13:49:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:35.313 13:49:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:35.313 [2024-07-25 13:49:24.263794] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:35.313 [2024-07-25 13:49:24.263906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61759 ] 00:06:35.313 { 00:06:35.313 "subsystems": [ 00:06:35.313 { 00:06:35.313 "subsystem": "bdev", 00:06:35.313 "config": [ 00:06:35.313 { 00:06:35.313 "params": { 00:06:35.313 "trtype": "pcie", 00:06:35.313 "traddr": "0000:00:10.0", 00:06:35.313 "name": "Nvme0" 00:06:35.313 }, 00:06:35.313 "method": "bdev_nvme_attach_controller" 00:06:35.313 }, 00:06:35.313 { 00:06:35.313 "method": "bdev_wait_for_examine" 00:06:35.313 } 00:06:35.313 ] 00:06:35.313 } 00:06:35.313 ] 00:06:35.313 } 00:06:35.571 [2024-07-25 13:49:24.392554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.571 [2024-07-25 13:49:24.497075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.571 [2024-07-25 13:49:24.553782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:36.086  Copying: 56/56 [kB] (average 54 MBps) 00:06:36.086 00:06:36.086 13:49:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:36.086 13:49:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:36.086 13:49:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:36.086 13:49:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:36.086 [2024-07-25 13:49:24.947624] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:36.086 [2024-07-25 13:49:24.947728] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61772 ] 00:06:36.086 { 00:06:36.086 "subsystems": [ 00:06:36.086 { 00:06:36.086 "subsystem": "bdev", 00:06:36.086 "config": [ 00:06:36.086 { 00:06:36.086 "params": { 00:06:36.086 "trtype": "pcie", 00:06:36.086 "traddr": "0000:00:10.0", 00:06:36.086 "name": "Nvme0" 00:06:36.086 }, 00:06:36.086 "method": "bdev_nvme_attach_controller" 00:06:36.086 }, 00:06:36.086 { 00:06:36.086 "method": "bdev_wait_for_examine" 00:06:36.086 } 00:06:36.086 ] 00:06:36.086 } 00:06:36.086 ] 00:06:36.086 } 00:06:36.086 [2024-07-25 13:49:25.085792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.344 [2024-07-25 13:49:25.191186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.344 [2024-07-25 13:49:25.245286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:36.601  Copying: 56/56 [kB] (average 54 MBps) 00:06:36.601 00:06:36.601 13:49:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.601 13:49:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:36.601 13:49:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:36.601 13:49:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:36.601 13:49:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:36.601 13:49:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:36.602 13:49:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:36.602 13:49:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:36.602 13:49:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:36.602 13:49:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:36.602 13:49:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:36.859 [2024-07-25 13:49:25.651879] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:36.859 [2024-07-25 13:49:25.651974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61788 ] 00:06:36.859 { 00:06:36.859 "subsystems": [ 00:06:36.859 { 00:06:36.859 "subsystem": "bdev", 00:06:36.859 "config": [ 00:06:36.859 { 00:06:36.859 "params": { 00:06:36.859 "trtype": "pcie", 00:06:36.859 "traddr": "0000:00:10.0", 00:06:36.859 "name": "Nvme0" 00:06:36.859 }, 00:06:36.859 "method": "bdev_nvme_attach_controller" 00:06:36.859 }, 00:06:36.859 { 00:06:36.859 "method": "bdev_wait_for_examine" 00:06:36.859 } 00:06:36.859 ] 00:06:36.859 } 00:06:36.859 ] 00:06:36.859 } 00:06:36.859 [2024-07-25 13:49:25.790943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.117 [2024-07-25 13:49:25.892606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.117 [2024-07-25 13:49:25.946759] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:37.375  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:37.375 00:06:37.375 13:49:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:37.375 13:49:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:37.375 13:49:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:37.375 13:49:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:37.375 13:49:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:37.375 13:49:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:37.375 13:49:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:37.375 13:49:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:37.941 13:49:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:37.941 13:49:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:37.941 13:49:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:37.941 13:49:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:37.941 { 00:06:37.941 "subsystems": [ 00:06:37.941 { 00:06:37.941 "subsystem": "bdev", 00:06:37.941 "config": [ 00:06:37.941 { 00:06:37.941 "params": { 00:06:37.941 "trtype": "pcie", 00:06:37.941 "traddr": "0000:00:10.0", 00:06:37.941 "name": "Nvme0" 00:06:37.941 }, 00:06:37.941 "method": "bdev_nvme_attach_controller" 00:06:37.941 }, 00:06:37.941 { 00:06:37.941 "method": "bdev_wait_for_examine" 00:06:37.941 } 00:06:37.941 ] 00:06:37.941 } 00:06:37.941 ] 00:06:37.941 } 00:06:37.941 [2024-07-25 13:49:26.905952] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
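
By the time the 16 KiB passes start above, all three transfer geometries have been traced: count falls from 15 to 7 to 3 as the block size doubles, keeping the total at or just under 60 KiB. A quick check of that relationship (an observation about the traced values, not a quote from the script):

    for bs in 4096 8192 16384; do
        count=$((61440 / bs))
        echo "bs=$bs  count=$count  size=$((count * bs))"
    done
    # bs=4096   count=15  size=61440
    # bs=8192   count=7   size=57344
    # bs=16384  count=3   size=49152
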
00:06:37.941 [2024-07-25 13:49:26.906051] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61812 ] 00:06:38.200 [2024-07-25 13:49:27.044356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.200 [2024-07-25 13:49:27.149061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.200 [2024-07-25 13:49:27.205761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.716  Copying: 48/48 [kB] (average 46 MBps) 00:06:38.716 00:06:38.716 13:49:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:38.716 13:49:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:38.716 13:49:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:38.716 13:49:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.716 [2024-07-25 13:49:27.622398] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:38.716 [2024-07-25 13:49:27.622482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61826 ] 00:06:38.716 { 00:06:38.716 "subsystems": [ 00:06:38.716 { 00:06:38.716 "subsystem": "bdev", 00:06:38.716 "config": [ 00:06:38.716 { 00:06:38.716 "params": { 00:06:38.716 "trtype": "pcie", 00:06:38.716 "traddr": "0000:00:10.0", 00:06:38.716 "name": "Nvme0" 00:06:38.716 }, 00:06:38.716 "method": "bdev_nvme_attach_controller" 00:06:38.716 }, 00:06:38.716 { 00:06:38.716 "method": "bdev_wait_for_examine" 00:06:38.716 } 00:06:38.716 ] 00:06:38.716 } 00:06:38.716 ] 00:06:38.716 } 00:06:38.974 [2024-07-25 13:49:27.753581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.974 [2024-07-25 13:49:27.862043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.974 [2024-07-25 13:49:27.919271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.492  Copying: 48/48 [kB] (average 46 MBps) 00:06:39.492 00:06:39.492 13:49:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:39.492 13:49:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:39.492 13:49:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:39.492 13:49:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:39.492 13:49:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:39.492 13:49:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:39.492 13:49:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:39.492 13:49:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:39.492 13:49:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # gen_conf 00:06:39.492 13:49:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:39.492 13:49:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.492 [2024-07-25 13:49:28.334541] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:39.492 [2024-07-25 13:49:28.334648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61847 ] 00:06:39.492 { 00:06:39.492 "subsystems": [ 00:06:39.492 { 00:06:39.492 "subsystem": "bdev", 00:06:39.492 "config": [ 00:06:39.492 { 00:06:39.492 "params": { 00:06:39.492 "trtype": "pcie", 00:06:39.492 "traddr": "0000:00:10.0", 00:06:39.492 "name": "Nvme0" 00:06:39.492 }, 00:06:39.492 "method": "bdev_nvme_attach_controller" 00:06:39.492 }, 00:06:39.492 { 00:06:39.492 "method": "bdev_wait_for_examine" 00:06:39.492 } 00:06:39.492 ] 00:06:39.492 } 00:06:39.492 ] 00:06:39.492 } 00:06:39.492 [2024-07-25 13:49:28.474630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.750 [2024-07-25 13:49:28.578621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.750 [2024-07-25 13:49:28.635701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.009  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:40.009 00:06:40.009 13:49:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:40.009 13:49:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:40.009 13:49:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:40.009 13:49:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:40.009 13:49:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:40.009 13:49:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:40.009 13:49:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.575 13:49:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:40.575 13:49:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:40.575 13:49:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:40.575 13:49:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.575 [2024-07-25 13:49:29.568578] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:40.575 [2024-07-25 13:49:29.568699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61866 ] 00:06:40.575 { 00:06:40.575 "subsystems": [ 00:06:40.575 { 00:06:40.575 "subsystem": "bdev", 00:06:40.575 "config": [ 00:06:40.575 { 00:06:40.575 "params": { 00:06:40.575 "trtype": "pcie", 00:06:40.575 "traddr": "0000:00:10.0", 00:06:40.575 "name": "Nvme0" 00:06:40.575 }, 00:06:40.575 "method": "bdev_nvme_attach_controller" 00:06:40.575 }, 00:06:40.575 { 00:06:40.575 "method": "bdev_wait_for_examine" 00:06:40.575 } 00:06:40.575 ] 00:06:40.575 } 00:06:40.575 ] 00:06:40.575 } 00:06:40.833 [2024-07-25 13:49:29.706812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.833 [2024-07-25 13:49:29.814940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.091 [2024-07-25 13:49:29.872638] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.349  Copying: 48/48 [kB] (average 46 MBps) 00:06:41.349 00:06:41.349 13:49:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:41.349 13:49:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:41.349 13:49:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:41.349 13:49:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:41.349 [2024-07-25 13:49:30.280476] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:41.349 [2024-07-25 13:49:30.280607] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61885 ] 00:06:41.349 { 00:06:41.349 "subsystems": [ 00:06:41.349 { 00:06:41.349 "subsystem": "bdev", 00:06:41.349 "config": [ 00:06:41.349 { 00:06:41.349 "params": { 00:06:41.349 "trtype": "pcie", 00:06:41.349 "traddr": "0000:00:10.0", 00:06:41.349 "name": "Nvme0" 00:06:41.349 }, 00:06:41.349 "method": "bdev_nvme_attach_controller" 00:06:41.349 }, 00:06:41.349 { 00:06:41.349 "method": "bdev_wait_for_examine" 00:06:41.349 } 00:06:41.349 ] 00:06:41.349 } 00:06:41.349 ] 00:06:41.349 } 00:06:41.607 [2024-07-25 13:49:30.422050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.607 [2024-07-25 13:49:30.529120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.607 [2024-07-25 13:49:30.582701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.122  Copying: 48/48 [kB] (average 46 MBps) 00:06:42.122 00:06:42.122 13:49:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:42.122 13:49:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:42.122 13:49:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:42.122 13:49:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:42.122 13:49:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:42.122 13:49:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:42.122 13:49:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:42.122 13:49:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:42.122 13:49:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:42.122 13:49:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:42.122 13:49:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.123 { 00:06:42.123 "subsystems": [ 00:06:42.123 { 00:06:42.123 "subsystem": "bdev", 00:06:42.123 "config": [ 00:06:42.123 { 00:06:42.123 "params": { 00:06:42.123 "trtype": "pcie", 00:06:42.123 "traddr": "0000:00:10.0", 00:06:42.123 "name": "Nvme0" 00:06:42.123 }, 00:06:42.123 "method": "bdev_nvme_attach_controller" 00:06:42.123 }, 00:06:42.123 { 00:06:42.123 "method": "bdev_wait_for_examine" 00:06:42.123 } 00:06:42.123 ] 00:06:42.123 } 00:06:42.123 ] 00:06:42.123 } 00:06:42.123 [2024-07-25 13:49:30.972248] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
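
Each pass above calls gen_bytes <size> to produce the random payload that spdk_dd then reads from dd.dump0 (the redirection itself is not visible in the xtrace). The helper's implementation is not shown in this log, but the payload captured by the dd_rw_offset trace a little further down is a lowercase alphanumeric string of exactly the requested length, so a stand-in along these lines would produce equivalent input (purely illustrative; the real dd/common.sh helper may differ):

    gen_bytes() {
        local size=$1
        # emit $size pseudo-random [a-z0-9] characters on stdout
        tr -dc 'a-z0-9' < /dev/urandom | head -c "$size"
    }
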
00:06:42.123 [2024-07-25 13:49:30.972368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61895 ] 00:06:42.123 [2024-07-25 13:49:31.108536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.379 [2024-07-25 13:49:31.226113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.379 [2024-07-25 13:49:31.280235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.636  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:42.637 00:06:42.637 00:06:42.637 real 0m16.070s 00:06:42.637 user 0m11.928s 00:06:42.637 sys 0m5.699s 00:06:42.637 13:49:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.637 13:49:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.637 ************************************ 00:06:42.637 END TEST dd_rw 00:06:42.637 ************************************ 00:06:42.895 13:49:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:42.895 13:49:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.895 13:49:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.895 13:49:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.895 ************************************ 00:06:42.895 START TEST dd_rw_offset 00:06:42.895 ************************************ 00:06:42.895 13:49:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:06:42.895 13:49:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:42.895 13:49:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:42.895 13:49:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:42.895 13:49:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:42.895 13:49:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:42.895 13:49:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=9qlcntplz6artlaq2czhm00ig1mhxo0csbtvqb15g477weie00n7xjkdb5s0nfe7kplrm6um3oawqfrbnmq6rsmiontlv69gspclm5vnwdla3up0e5ydglajgqazgqnccvpf66qxj4k0ie86uo05k6lrrrvggx19nrh2g0f8k2re00zd8sep8gvm1ilv4k1126gqino66tquitf3u98sp9pg5b89mindwb7l8vsiwg377rlx0y7aoyfvgmhshb1xqe8bytb5o7qbtrbhwnodojlwk7mofgctf2f7kql9m4xpz3r5p73mmlhyyi6z6dw69glrfs2nq7160k1vn2xcblym8ee8ifv4am3px7pbovk991vs09sa4gej2yr1gg2sdbpqctnmi5hc1sh9c4nek3l9i7dkqyufjsilupovg6918mqmd2yrf0iyvwxrucpc2m4p67i6f2jtjqqoon9n58789dxlgey0cbt4zjwewapge47u9u4snevw98du2lu9ux63qiqndfro91bfipac8wzgel3lrpm9a5rwm8b4l9z2k5otkndxmgnveijzzdx6ll5qrkundfmnhkuvj5l9si0py7bkmyjajw9yktpdjr1uviy06ti9mc68fqr5ahmjj06i3pxvaxhghgy43tigp5mnjlwot6z73uj5ks9fhjvvqpj3hn392mb0xn8zrtwrvshwdqdeqv4okmrfhhzt86hh40zmmhimpg3mbn0t5colk43rza5d1xytfz0z63kkhi3t7aogs2gg2gbh0iyotih75lvv36odyzi41k3zaul4qu2tzh911u0jjfr0r7v7k9jl0w26dk9bwfpnl2krc3kpxt43zb6jznnyc55jwjzaidcby8qujzordrzivj3jciwil7ulrk41jhojh78kd5bydeo45v44fz1b597o42hcwjxic8jrbjcux14vlo2gsnqyrqfjh9dgaqrmjb2d3gkhmg0x35cp22uke7s1kj626p3c2qfu8xm6mqrkph8alam1ocjpktwijkkaltqv0xwhtuump1iaj6taqnxysv7ivtu9gcqx8ucfcn93nphzepuuxtbn0b3ex0qnl44p4dz3j9idckd3sbychvloxbw2xuc38pv6j281c08eqyia9symjylgoi6xhmk4dvwz7x9cmdmcgw6j2ksfzihkrmlqc5qyft643utb7crwkev24imcwe9183zhsezsf93paa495sumfn5fmutax50iggu7d30qoxw4ov40d5yb4p57ga5jycuk007fbszuzl1yz8rcoljwitwgovhuekpbz24mhfo1fu1j43yvwmwf48v8y79qizx4aiscjgoict3rupmsbxs1ozy791ehliebkr82wxf3f1jbsdtjxavgd7j0yg38s8xtp9eix8buhj50m4rhv1b7z89yo3agf6d2uehno0o9x9pjkflgscbm0xtlghvredpwavrfpp4o33npjp5kqsob13a4yk8gwxn2roh6r8mchq4u9goaqca5fwu752xdbchevolnm8c7pv7hmo9diqszq8p8um9rwcwyt0h5lubfp3mjxfmdyaareljxmrs690ma8t97lgkbpyp2lf9uetlseeouspohxsuhwx5hpz4jt6tzmvwvzupwugpgagiog7wi325c3qipts05r2i1hud7dcduex1aq39bh2j17yxaosgl4s02lyxlez7sjap4z8lxvcvcxnluw8re7gdssfdvbzygdgn33jrilfm40lvn5oamqdah2qrfeqnyafr8a0jkt4co0pip07s881e2yktz6a1ooalarvjkvhx1zh4gatg50p14sef67vj3ctmaznfyyt455zs898m8rod80aw22jdghydc14qgwgnh0lc9jck1or1udllgjhufsbyfp4q3mpu4iiemvx2osiecouktt9czmrr28zknrd05rydrvgjzah9atla3q5z6asjuziws0n2diwhtk7viep6s878qa15w5t7aozucb0cxjapbc2p77iho3ryz8mb9a41p7ql6s84qomw2dzrzff3x6ojga8rbpymktujluy9b06s79wcel8ikgmdgslf6lzlr1znia1q33m1f2gw55a6nw3117072vu3g416cqong8g651dvcafrqhkl52h3a0m3layczxx98vy2l3vnrilull6qg42laq1cculm82komezcsars18dc3yytdlbz4rj6wmwg95nc89us7pw5ybwytpxzz3tds6pisq9pmo2f64srsd7vqv6bwa6mxw4e0x56mk8bd7z2563yy77vym3gvwzxs03okmxa9g9x9s2gp7vu7um5714ab2rcn7ibkeijf5lac3gac6lhcz9zbat7rwlwh8njsh6t6hy7eo6dlzrww9hvhog6hn27x1ifd7kr0uuxvgyh0rmhq1ewabzlyt8wznaaov0zvgc07yfbnogyvi2dg7ui10heado6rjpnin9vtq8xoo0k9cm43k2i0slx45uc8gqonirx7vr88ijqj2pprvhsc27yp5146zs2vn55bk0wg1eu78c8xmvs8kawhjjbnjil88eec9q9usiw5c6m5vl80p0klfvaw0y6az7sn8di5ez567zuw1p4qt9yn4fkix69n386lnpb7hj6gs2lptaccnrnhtmqrcrvsp0j09lj4cln2raq2erxc85ybn9sgnd9zfhssk8yrnbz80l5dyia3agvk92s46ahljo5trlwbskybmjjq2pkrhc3f9wn0wjyd0vi9pw27c0ovem3xvpylwjyhya45ylc37x8gqhpzk4s2iux0gxkiq8yxurcgebhs5m19xt7evz3alcwhw209gyuw1uanfkpzu1ghbpq36hv00bvkxt6plgtwpq646juc8dzq8waeor9rd7zn8u300bm63chvq7k52k67idnhmo64exx3bgkexxijjrlddj0lfls67ywn2bp37mmr49qtje4od1141pm8tz0yo1dfnzapri72lyb2j2jglrbf4jc99s3xgqn78oa940947qwy7fkrhptcuccllg3izu3vyleipitqds97nb7ywru242d7zvt8jpntm8412dpj4o6gpaco1d9fbnezea54pwx33cujr1ch2kfq1u3wr0id2d8oy594c3dyb0lc98ues3dkvw8u2hvsis8a45s795pbw7odnxr8xz4wxpxy8xzp5tszgt8sbrriel3da0yrxsu2iiq9v0bkzr3gwpicmwj3j2d7gx3n2veeqbkfsp3x5b8kbj1wbnr9e7seauen2fvets6ri4p3m3gjtwziwub7y82tnrkvw87x19ktw66lbn25ufvoqkdlqgna6s3fkoefa16o8oqhsnz9p3x7s1wqrgizs00vejna7t52a9bkr7zyumfx1plod635jh5vty85rer8jicrh8rlozjrs8zuzyxr8njvcdbeu92ed4ij8963ucl0mlzxfle0hqets8v8zyp6pkp3j04hb37sdhqk07i81j8n6c5t55izxs
iwzljcmr94e77cnnogelfknldgqdayp6qkkqj2e6nn83cft3az6tjelh782xgg86n3ue608ot7txy57o9lamb0desj74k2oj1p4hwzoj4xkkzus24coklmeru989zu24pcpm1kyyfy17pg4brlij8o6jubw0wk1n5qgdrxln0desp3l19id0x0zajhiqwycd16eil3cuogxbmtvdlkkryldlh1c581taxe7cx1d51aqpn4gj0t2rkm830vtjod9xdgrc0k8zkgl6z8cowqdyhinrbekpt4v4he33xj3gi2ad3b4i2zww187gvquhji1snelqszc1ql1epwbyw53yc0q6lu2l3qu26mzua77wtbr8ldioqf5angcotjvx8zwpnr9t4tyutbwm44il15ls7g1l4wuard7pifytnt9hmhl46czyjgkaj489dhv8yzm5jvl49hfwl8op1qcvu85pdqto4rusujybfxn78583gam28jdi0pvhe839ndzzsmefqelgzzgrraaf0w862060ura3rpaqf1fx9d 00:06:42.895 13:49:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:42.895 13:49:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:42.895 13:49:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:42.895 13:49:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:42.895 [2024-07-25 13:49:31.793670] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:42.895 [2024-07-25 13:49:31.793801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61931 ] 00:06:42.895 { 00:06:42.895 "subsystems": [ 00:06:42.895 { 00:06:42.895 "subsystem": "bdev", 00:06:42.895 "config": [ 00:06:42.895 { 00:06:42.895 "params": { 00:06:42.895 "trtype": "pcie", 00:06:42.895 "traddr": "0000:00:10.0", 00:06:42.895 "name": "Nvme0" 00:06:42.895 }, 00:06:42.895 "method": "bdev_nvme_attach_controller" 00:06:42.895 }, 00:06:42.895 { 00:06:42.895 "method": "bdev_wait_for_examine" 00:06:42.895 } 00:06:42.895 ] 00:06:42.895 } 00:06:42.895 ] 00:06:42.895 } 00:06:43.153 [2024-07-25 13:49:31.935880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.153 [2024-07-25 13:49:32.060906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.153 [2024-07-25 13:49:32.117765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.411  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:43.411 00:06:43.675 13:49:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:43.675 13:49:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:43.675 13:49:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:43.675 13:49:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:43.675 [2024-07-25 13:49:32.498379] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:43.675 [2024-07-25 13:49:32.498476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61950 ] 00:06:43.675 { 00:06:43.675 "subsystems": [ 00:06:43.675 { 00:06:43.675 "subsystem": "bdev", 00:06:43.675 "config": [ 00:06:43.675 { 00:06:43.675 "params": { 00:06:43.675 "trtype": "pcie", 00:06:43.675 "traddr": "0000:00:10.0", 00:06:43.675 "name": "Nvme0" 00:06:43.675 }, 00:06:43.675 "method": "bdev_nvme_attach_controller" 00:06:43.675 }, 00:06:43.675 { 00:06:43.675 "method": "bdev_wait_for_examine" 00:06:43.675 } 00:06:43.675 ] 00:06:43.675 } 00:06:43.675 ] 00:06:43.675 } 00:06:43.675 [2024-07-25 13:49:32.639545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.956 [2024-07-25 13:49:32.767384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.956 [2024-07-25 13:49:32.824252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.226  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:44.226 00:06:44.226 13:49:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:44.227 13:49:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 9qlcntplz6artlaq2czhm00ig1mhxo0csbtvqb15g477weie00n7xjkdb5s0nfe7kplrm6um3oawqfrbnmq6rsmiontlv69gspclm5vnwdla3up0e5ydglajgqazgqnccvpf66qxj4k0ie86uo05k6lrrrvggx19nrh2g0f8k2re00zd8sep8gvm1ilv4k1126gqino66tquitf3u98sp9pg5b89mindwb7l8vsiwg377rlx0y7aoyfvgmhshb1xqe8bytb5o7qbtrbhwnodojlwk7mofgctf2f7kql9m4xpz3r5p73mmlhyyi6z6dw69glrfs2nq7160k1vn2xcblym8ee8ifv4am3px7pbovk991vs09sa4gej2yr1gg2sdbpqctnmi5hc1sh9c4nek3l9i7dkqyufjsilupovg6918mqmd2yrf0iyvwxrucpc2m4p67i6f2jtjqqoon9n58789dxlgey0cbt4zjwewapge47u9u4snevw98du2lu9ux63qiqndfro91bfipac8wzgel3lrpm9a5rwm8b4l9z2k5otkndxmgnveijzzdx6ll5qrkundfmnhkuvj5l9si0py7bkmyjajw9yktpdjr1uviy06ti9mc68fqr5ahmjj06i3pxvaxhghgy43tigp5mnjlwot6z73uj5ks9fhjvvqpj3hn392mb0xn8zrtwrvshwdqdeqv4okmrfhhzt86hh40zmmhimpg3mbn0t5colk43rza5d1xytfz0z63kkhi3t7aogs2gg2gbh0iyotih75lvv36odyzi41k3zaul4qu2tzh911u0jjfr0r7v7k9jl0w26dk9bwfpnl2krc3kpxt43zb6jznnyc55jwjzaidcby8qujzordrzivj3jciwil7ulrk41jhojh78kd5bydeo45v44fz1b597o42hcwjxic8jrbjcux14vlo2gsnqyrqfjh9dgaqrmjb2d3gkhmg0x35cp22uke7s1kj626p3c2qfu8xm6mqrkph8alam1ocjpktwijkkaltqv0xwhtuump1iaj6taqnxysv7ivtu9gcqx8ucfcn93nphzepuuxtbn0b3ex0qnl44p4dz3j9idckd3sbychvloxbw2xuc38pv6j281c08eqyia9symjylgoi6xhmk4dvwz7x9cmdmcgw6j2ksfzihkrmlqc5qyft643utb7crwkev24imcwe9183zhsezsf93paa495sumfn5fmutax50iggu7d30qoxw4ov40d5yb4p57ga5jycuk007fbszuzl1yz8rcoljwitwgovhuekpbz24mhfo1fu1j43yvwmwf48v8y79qizx4aiscjgoict3rupmsbxs1ozy791ehliebkr82wxf3f1jbsdtjxavgd7j0yg38s8xtp9eix8buhj50m4rhv1b7z89yo3agf6d2uehno0o9x9pjkflgscbm0xtlghvredpwavrfpp4o33npjp5kqsob13a4yk8gwxn2roh6r8mchq4u9goaqca5fwu752xdbchevolnm8c7pv7hmo9diqszq8p8um9rwcwyt0h5lubfp3mjxfmdyaareljxmrs690ma8t97lgkbpyp2lf9uetlseeouspohxsuhwx5hpz4jt6tzmvwvzupwugpgagiog7wi325c3qipts05r2i1hud7dcduex1aq39bh2j17yxaosgl4s02lyxlez7sjap4z8lxvcvcxnluw8re7gdssfdvbzygdgn33jrilfm40lvn5oamqdah2qrfeqnyafr8a0jkt4co0pip07s881e2yktz6a1ooalarvjkvhx1zh4gatg50p14sef67vj3ctmaznfyyt455zs898m8rod80aw22jdghydc14qgwgnh0lc9jck1or1udllgjhufsbyfp4q3mpu4iiemvx2osiecouktt9czmrr28zknrd05rydrvgjzah9atla3q5z6asjuziws0n2diwhtk7viep6s878qa15w5t7aozucb0cxjapbc2p77iho3ryz8mb9a41p7ql6s84qomw2dzrzff3x6ojga8rbpymktujluy9b06s79wcel8ikgmdgslf6lzlr1znia1q33m1f2gw55a6nw3117072vu3g416cqong8
g651dvcafrqhkl52h3a0m3layczxx98vy2l3vnrilull6qg42laq1cculm82komezcsars18dc3yytdlbz4rj6wmwg95nc89us7pw5ybwytpxzz3tds6pisq9pmo2f64srsd7vqv6bwa6mxw4e0x56mk8bd7z2563yy77vym3gvwzxs03okmxa9g9x9s2gp7vu7um5714ab2rcn7ibkeijf5lac3gac6lhcz9zbat7rwlwh8njsh6t6hy7eo6dlzrww9hvhog6hn27x1ifd7kr0uuxvgyh0rmhq1ewabzlyt8wznaaov0zvgc07yfbnogyvi2dg7ui10heado6rjpnin9vtq8xoo0k9cm43k2i0slx45uc8gqonirx7vr88ijqj2pprvhsc27yp5146zs2vn55bk0wg1eu78c8xmvs8kawhjjbnjil88eec9q9usiw5c6m5vl80p0klfvaw0y6az7sn8di5ez567zuw1p4qt9yn4fkix69n386lnpb7hj6gs2lptaccnrnhtmqrcrvsp0j09lj4cln2raq2erxc85ybn9sgnd9zfhssk8yrnbz80l5dyia3agvk92s46ahljo5trlwbskybmjjq2pkrhc3f9wn0wjyd0vi9pw27c0ovem3xvpylwjyhya45ylc37x8gqhpzk4s2iux0gxkiq8yxurcgebhs5m19xt7evz3alcwhw209gyuw1uanfkpzu1ghbpq36hv00bvkxt6plgtwpq646juc8dzq8waeor9rd7zn8u300bm63chvq7k52k67idnhmo64exx3bgkexxijjrlddj0lfls67ywn2bp37mmr49qtje4od1141pm8tz0yo1dfnzapri72lyb2j2jglrbf4jc99s3xgqn78oa940947qwy7fkrhptcuccllg3izu3vyleipitqds97nb7ywru242d7zvt8jpntm8412dpj4o6gpaco1d9fbnezea54pwx33cujr1ch2kfq1u3wr0id2d8oy594c3dyb0lc98ues3dkvw8u2hvsis8a45s795pbw7odnxr8xz4wxpxy8xzp5tszgt8sbrriel3da0yrxsu2iiq9v0bkzr3gwpicmwj3j2d7gx3n2veeqbkfsp3x5b8kbj1wbnr9e7seauen2fvets6ri4p3m3gjtwziwub7y82tnrkvw87x19ktw66lbn25ufvoqkdlqgna6s3fkoefa16o8oqhsnz9p3x7s1wqrgizs00vejna7t52a9bkr7zyumfx1plod635jh5vty85rer8jicrh8rlozjrs8zuzyxr8njvcdbeu92ed4ij8963ucl0mlzxfle0hqets8v8zyp6pkp3j04hb37sdhqk07i81j8n6c5t55izxsiwzljcmr94e77cnnogelfknldgqdayp6qkkqj2e6nn83cft3az6tjelh782xgg86n3ue608ot7txy57o9lamb0desj74k2oj1p4hwzoj4xkkzus24coklmeru989zu24pcpm1kyyfy17pg4brlij8o6jubw0wk1n5qgdrxln0desp3l19id0x0zajhiqwycd16eil3cuogxbmtvdlkkryldlh1c581taxe7cx1d51aqpn4gj0t2rkm830vtjod9xdgrc0k8zkgl6z8cowqdyhinrbekpt4v4he33xj3gi2ad3b4i2zww187gvquhji1snelqszc1ql1epwbyw53yc0q6lu2l3qu26mzua77wtbr8ldioqf5angcotjvx8zwpnr9t4tyutbwm44il15ls7g1l4wuard7pifytnt9hmhl46czyjgkaj489dhv8yzm5jvl49hfwl8op1qcvu85pdqto4rusujybfxn78583gam28jdi0pvhe839ndzzsmefqelgzzgrraaf0w862060ura3rpaqf1fx9d == 
\9\q\l\c\n\t\p\l\z\6\a\r\t\l\a\q\2\c\z\h\m\0\0\i\g\1\m\h\x\o\0\c\s\b\t\v\q\b\1\5\g\4\7\7\w\e\i\e\0\0\n\7\x\j\k\d\b\5\s\0\n\f\e\7\k\p\l\r\m\6\u\m\3\o\a\w\q\f\r\b\n\m\q\6\r\s\m\i\o\n\t\l\v\6\9\g\s\p\c\l\m\5\v\n\w\d\l\a\3\u\p\0\e\5\y\d\g\l\a\j\g\q\a\z\g\q\n\c\c\v\p\f\6\6\q\x\j\4\k\0\i\e\8\6\u\o\0\5\k\6\l\r\r\r\v\g\g\x\1\9\n\r\h\2\g\0\f\8\k\2\r\e\0\0\z\d\8\s\e\p\8\g\v\m\1\i\l\v\4\k\1\1\2\6\g\q\i\n\o\6\6\t\q\u\i\t\f\3\u\9\8\s\p\9\p\g\5\b\8\9\m\i\n\d\w\b\7\l\8\v\s\i\w\g\3\7\7\r\l\x\0\y\7\a\o\y\f\v\g\m\h\s\h\b\1\x\q\e\8\b\y\t\b\5\o\7\q\b\t\r\b\h\w\n\o\d\o\j\l\w\k\7\m\o\f\g\c\t\f\2\f\7\k\q\l\9\m\4\x\p\z\3\r\5\p\7\3\m\m\l\h\y\y\i\6\z\6\d\w\6\9\g\l\r\f\s\2\n\q\7\1\6\0\k\1\v\n\2\x\c\b\l\y\m\8\e\e\8\i\f\v\4\a\m\3\p\x\7\p\b\o\v\k\9\9\1\v\s\0\9\s\a\4\g\e\j\2\y\r\1\g\g\2\s\d\b\p\q\c\t\n\m\i\5\h\c\1\s\h\9\c\4\n\e\k\3\l\9\i\7\d\k\q\y\u\f\j\s\i\l\u\p\o\v\g\6\9\1\8\m\q\m\d\2\y\r\f\0\i\y\v\w\x\r\u\c\p\c\2\m\4\p\6\7\i\6\f\2\j\t\j\q\q\o\o\n\9\n\5\8\7\8\9\d\x\l\g\e\y\0\c\b\t\4\z\j\w\e\w\a\p\g\e\4\7\u\9\u\4\s\n\e\v\w\9\8\d\u\2\l\u\9\u\x\6\3\q\i\q\n\d\f\r\o\9\1\b\f\i\p\a\c\8\w\z\g\e\l\3\l\r\p\m\9\a\5\r\w\m\8\b\4\l\9\z\2\k\5\o\t\k\n\d\x\m\g\n\v\e\i\j\z\z\d\x\6\l\l\5\q\r\k\u\n\d\f\m\n\h\k\u\v\j\5\l\9\s\i\0\p\y\7\b\k\m\y\j\a\j\w\9\y\k\t\p\d\j\r\1\u\v\i\y\0\6\t\i\9\m\c\6\8\f\q\r\5\a\h\m\j\j\0\6\i\3\p\x\v\a\x\h\g\h\g\y\4\3\t\i\g\p\5\m\n\j\l\w\o\t\6\z\7\3\u\j\5\k\s\9\f\h\j\v\v\q\p\j\3\h\n\3\9\2\m\b\0\x\n\8\z\r\t\w\r\v\s\h\w\d\q\d\e\q\v\4\o\k\m\r\f\h\h\z\t\8\6\h\h\4\0\z\m\m\h\i\m\p\g\3\m\b\n\0\t\5\c\o\l\k\4\3\r\z\a\5\d\1\x\y\t\f\z\0\z\6\3\k\k\h\i\3\t\7\a\o\g\s\2\g\g\2\g\b\h\0\i\y\o\t\i\h\7\5\l\v\v\3\6\o\d\y\z\i\4\1\k\3\z\a\u\l\4\q\u\2\t\z\h\9\1\1\u\0\j\j\f\r\0\r\7\v\7\k\9\j\l\0\w\2\6\d\k\9\b\w\f\p\n\l\2\k\r\c\3\k\p\x\t\4\3\z\b\6\j\z\n\n\y\c\5\5\j\w\j\z\a\i\d\c\b\y\8\q\u\j\z\o\r\d\r\z\i\v\j\3\j\c\i\w\i\l\7\u\l\r\k\4\1\j\h\o\j\h\7\8\k\d\5\b\y\d\e\o\4\5\v\4\4\f\z\1\b\5\9\7\o\4\2\h\c\w\j\x\i\c\8\j\r\b\j\c\u\x\1\4\v\l\o\2\g\s\n\q\y\r\q\f\j\h\9\d\g\a\q\r\m\j\b\2\d\3\g\k\h\m\g\0\x\3\5\c\p\2\2\u\k\e\7\s\1\k\j\6\2\6\p\3\c\2\q\f\u\8\x\m\6\m\q\r\k\p\h\8\a\l\a\m\1\o\c\j\p\k\t\w\i\j\k\k\a\l\t\q\v\0\x\w\h\t\u\u\m\p\1\i\a\j\6\t\a\q\n\x\y\s\v\7\i\v\t\u\9\g\c\q\x\8\u\c\f\c\n\9\3\n\p\h\z\e\p\u\u\x\t\b\n\0\b\3\e\x\0\q\n\l\4\4\p\4\d\z\3\j\9\i\d\c\k\d\3\s\b\y\c\h\v\l\o\x\b\w\2\x\u\c\3\8\p\v\6\j\2\8\1\c\0\8\e\q\y\i\a\9\s\y\m\j\y\l\g\o\i\6\x\h\m\k\4\d\v\w\z\7\x\9\c\m\d\m\c\g\w\6\j\2\k\s\f\z\i\h\k\r\m\l\q\c\5\q\y\f\t\6\4\3\u\t\b\7\c\r\w\k\e\v\2\4\i\m\c\w\e\9\1\8\3\z\h\s\e\z\s\f\9\3\p\a\a\4\9\5\s\u\m\f\n\5\f\m\u\t\a\x\5\0\i\g\g\u\7\d\3\0\q\o\x\w\4\o\v\4\0\d\5\y\b\4\p\5\7\g\a\5\j\y\c\u\k\0\0\7\f\b\s\z\u\z\l\1\y\z\8\r\c\o\l\j\w\i\t\w\g\o\v\h\u\e\k\p\b\z\2\4\m\h\f\o\1\f\u\1\j\4\3\y\v\w\m\w\f\4\8\v\8\y\7\9\q\i\z\x\4\a\i\s\c\j\g\o\i\c\t\3\r\u\p\m\s\b\x\s\1\o\z\y\7\9\1\e\h\l\i\e\b\k\r\8\2\w\x\f\3\f\1\j\b\s\d\t\j\x\a\v\g\d\7\j\0\y\g\3\8\s\8\x\t\p\9\e\i\x\8\b\u\h\j\5\0\m\4\r\h\v\1\b\7\z\8\9\y\o\3\a\g\f\6\d\2\u\e\h\n\o\0\o\9\x\9\p\j\k\f\l\g\s\c\b\m\0\x\t\l\g\h\v\r\e\d\p\w\a\v\r\f\p\p\4\o\3\3\n\p\j\p\5\k\q\s\o\b\1\3\a\4\y\k\8\g\w\x\n\2\r\o\h\6\r\8\m\c\h\q\4\u\9\g\o\a\q\c\a\5\f\w\u\7\5\2\x\d\b\c\h\e\v\o\l\n\m\8\c\7\p\v\7\h\m\o\9\d\i\q\s\z\q\8\p\8\u\m\9\r\w\c\w\y\t\0\h\5\l\u\b\f\p\3\m\j\x\f\m\d\y\a\a\r\e\l\j\x\m\r\s\6\9\0\m\a\8\t\9\7\l\g\k\b\p\y\p\2\l\f\9\u\e\t\l\s\e\e\o\u\s\p\o\h\x\s\u\h\w\x\5\h\p\z\4\j\t\6\t\z\m\v\w\v\z\u\p\w\u\g\p\g\a\g\i\o\g\7\w\i\3\2\5\c\3\q\i\p\t\s\0\5\r\2\i\1\h\u\d\7\d\c\d\u\e\x\1\a\q\3\9\b\h\2\j\1\7\y\x\a\o\s\g\l\4\s\0\2\l\y\x\l\e\z\7\s\j\a\p\4\z\8\l\x\v\c\v\c\x\n\l\u\w\8\r\e\7\g\d\s\s\f\d\v\b\z\y\g\d\g\n\3\3\j\r\i\l\f\m\4\0\l\v\n\5\
o\a\m\q\d\a\h\2\q\r\f\e\q\n\y\a\f\r\8\a\0\j\k\t\4\c\o\0\p\i\p\0\7\s\8\8\1\e\2\y\k\t\z\6\a\1\o\o\a\l\a\r\v\j\k\v\h\x\1\z\h\4\g\a\t\g\5\0\p\1\4\s\e\f\6\7\v\j\3\c\t\m\a\z\n\f\y\y\t\4\5\5\z\s\8\9\8\m\8\r\o\d\8\0\a\w\2\2\j\d\g\h\y\d\c\1\4\q\g\w\g\n\h\0\l\c\9\j\c\k\1\o\r\1\u\d\l\l\g\j\h\u\f\s\b\y\f\p\4\q\3\m\p\u\4\i\i\e\m\v\x\2\o\s\i\e\c\o\u\k\t\t\9\c\z\m\r\r\2\8\z\k\n\r\d\0\5\r\y\d\r\v\g\j\z\a\h\9\a\t\l\a\3\q\5\z\6\a\s\j\u\z\i\w\s\0\n\2\d\i\w\h\t\k\7\v\i\e\p\6\s\8\7\8\q\a\1\5\w\5\t\7\a\o\z\u\c\b\0\c\x\j\a\p\b\c\2\p\7\7\i\h\o\3\r\y\z\8\m\b\9\a\4\1\p\7\q\l\6\s\8\4\q\o\m\w\2\d\z\r\z\f\f\3\x\6\o\j\g\a\8\r\b\p\y\m\k\t\u\j\l\u\y\9\b\0\6\s\7\9\w\c\e\l\8\i\k\g\m\d\g\s\l\f\6\l\z\l\r\1\z\n\i\a\1\q\3\3\m\1\f\2\g\w\5\5\a\6\n\w\3\1\1\7\0\7\2\v\u\3\g\4\1\6\c\q\o\n\g\8\g\6\5\1\d\v\c\a\f\r\q\h\k\l\5\2\h\3\a\0\m\3\l\a\y\c\z\x\x\9\8\v\y\2\l\3\v\n\r\i\l\u\l\l\6\q\g\4\2\l\a\q\1\c\c\u\l\m\8\2\k\o\m\e\z\c\s\a\r\s\1\8\d\c\3\y\y\t\d\l\b\z\4\r\j\6\w\m\w\g\9\5\n\c\8\9\u\s\7\p\w\5\y\b\w\y\t\p\x\z\z\3\t\d\s\6\p\i\s\q\9\p\m\o\2\f\6\4\s\r\s\d\7\v\q\v\6\b\w\a\6\m\x\w\4\e\0\x\5\6\m\k\8\b\d\7\z\2\5\6\3\y\y\7\7\v\y\m\3\g\v\w\z\x\s\0\3\o\k\m\x\a\9\g\9\x\9\s\2\g\p\7\v\u\7\u\m\5\7\1\4\a\b\2\r\c\n\7\i\b\k\e\i\j\f\5\l\a\c\3\g\a\c\6\l\h\c\z\9\z\b\a\t\7\r\w\l\w\h\8\n\j\s\h\6\t\6\h\y\7\e\o\6\d\l\z\r\w\w\9\h\v\h\o\g\6\h\n\2\7\x\1\i\f\d\7\k\r\0\u\u\x\v\g\y\h\0\r\m\h\q\1\e\w\a\b\z\l\y\t\8\w\z\n\a\a\o\v\0\z\v\g\c\0\7\y\f\b\n\o\g\y\v\i\2\d\g\7\u\i\1\0\h\e\a\d\o\6\r\j\p\n\i\n\9\v\t\q\8\x\o\o\0\k\9\c\m\4\3\k\2\i\0\s\l\x\4\5\u\c\8\g\q\o\n\i\r\x\7\v\r\8\8\i\j\q\j\2\p\p\r\v\h\s\c\2\7\y\p\5\1\4\6\z\s\2\v\n\5\5\b\k\0\w\g\1\e\u\7\8\c\8\x\m\v\s\8\k\a\w\h\j\j\b\n\j\i\l\8\8\e\e\c\9\q\9\u\s\i\w\5\c\6\m\5\v\l\8\0\p\0\k\l\f\v\a\w\0\y\6\a\z\7\s\n\8\d\i\5\e\z\5\6\7\z\u\w\1\p\4\q\t\9\y\n\4\f\k\i\x\6\9\n\3\8\6\l\n\p\b\7\h\j\6\g\s\2\l\p\t\a\c\c\n\r\n\h\t\m\q\r\c\r\v\s\p\0\j\0\9\l\j\4\c\l\n\2\r\a\q\2\e\r\x\c\8\5\y\b\n\9\s\g\n\d\9\z\f\h\s\s\k\8\y\r\n\b\z\8\0\l\5\d\y\i\a\3\a\g\v\k\9\2\s\4\6\a\h\l\j\o\5\t\r\l\w\b\s\k\y\b\m\j\j\q\2\p\k\r\h\c\3\f\9\w\n\0\w\j\y\d\0\v\i\9\p\w\2\7\c\0\o\v\e\m\3\x\v\p\y\l\w\j\y\h\y\a\4\5\y\l\c\3\7\x\8\g\q\h\p\z\k\4\s\2\i\u\x\0\g\x\k\i\q\8\y\x\u\r\c\g\e\b\h\s\5\m\1\9\x\t\7\e\v\z\3\a\l\c\w\h\w\2\0\9\g\y\u\w\1\u\a\n\f\k\p\z\u\1\g\h\b\p\q\3\6\h\v\0\0\b\v\k\x\t\6\p\l\g\t\w\p\q\6\4\6\j\u\c\8\d\z\q\8\w\a\e\o\r\9\r\d\7\z\n\8\u\3\0\0\b\m\6\3\c\h\v\q\7\k\5\2\k\6\7\i\d\n\h\m\o\6\4\e\x\x\3\b\g\k\e\x\x\i\j\j\r\l\d\d\j\0\l\f\l\s\6\7\y\w\n\2\b\p\3\7\m\m\r\4\9\q\t\j\e\4\o\d\1\1\4\1\p\m\8\t\z\0\y\o\1\d\f\n\z\a\p\r\i\7\2\l\y\b\2\j\2\j\g\l\r\b\f\4\j\c\9\9\s\3\x\g\q\n\7\8\o\a\9\4\0\9\4\7\q\w\y\7\f\k\r\h\p\t\c\u\c\c\l\l\g\3\i\z\u\3\v\y\l\e\i\p\i\t\q\d\s\9\7\n\b\7\y\w\r\u\2\4\2\d\7\z\v\t\8\j\p\n\t\m\8\4\1\2\d\p\j\4\o\6\g\p\a\c\o\1\d\9\f\b\n\e\z\e\a\5\4\p\w\x\3\3\c\u\j\r\1\c\h\2\k\f\q\1\u\3\w\r\0\i\d\2\d\8\o\y\5\9\4\c\3\d\y\b\0\l\c\9\8\u\e\s\3\d\k\v\w\8\u\2\h\v\s\i\s\8\a\4\5\s\7\9\5\p\b\w\7\o\d\n\x\r\8\x\z\4\w\x\p\x\y\8\x\z\p\5\t\s\z\g\t\8\s\b\r\r\i\e\l\3\d\a\0\y\r\x\s\u\2\i\i\q\9\v\0\b\k\z\r\3\g\w\p\i\c\m\w\j\3\j\2\d\7\g\x\3\n\2\v\e\e\q\b\k\f\s\p\3\x\5\b\8\k\b\j\1\w\b\n\r\9\e\7\s\e\a\u\e\n\2\f\v\e\t\s\6\r\i\4\p\3\m\3\g\j\t\w\z\i\w\u\b\7\y\8\2\t\n\r\k\v\w\8\7\x\1\9\k\t\w\6\6\l\b\n\2\5\u\f\v\o\q\k\d\l\q\g\n\a\6\s\3\f\k\o\e\f\a\1\6\o\8\o\q\h\s\n\z\9\p\3\x\7\s\1\w\q\r\g\i\z\s\0\0\v\e\j\n\a\7\t\5\2\a\9\b\k\r\7\z\y\u\m\f\x\1\p\l\o\d\6\3\5\j\h\5\v\t\y\8\5\r\e\r\8\j\i\c\r\h\8\r\l\o\z\j\r\s\8\z\u\z\y\x\r\8\n\j\v\c\d\b\e\u\9\2\e\d\4\i\j\8\9\6\3\u\c\l\0\m\l\z\x\f\l\e\0\h\q\e\t\s\8\v\8\z\y\p\6\p\k\p\3\j\0\4\h\b\3\7\s\d\h\q\k\0\7\i\8\1\j\8\n\6\c\5\t\5\5\i\z\x\s\i\w\z\l\j
\c\m\r\9\4\e\7\7\c\n\n\o\g\e\l\f\k\n\l\d\g\q\d\a\y\p\6\q\k\k\q\j\2\e\6\n\n\8\3\c\f\t\3\a\z\6\t\j\e\l\h\7\8\2\x\g\g\8\6\n\3\u\e\6\0\8\o\t\7\t\x\y\5\7\o\9\l\a\m\b\0\d\e\s\j\7\4\k\2\o\j\1\p\4\h\w\z\o\j\4\x\k\k\z\u\s\2\4\c\o\k\l\m\e\r\u\9\8\9\z\u\2\4\p\c\p\m\1\k\y\y\f\y\1\7\p\g\4\b\r\l\i\j\8\o\6\j\u\b\w\0\w\k\1\n\5\q\g\d\r\x\l\n\0\d\e\s\p\3\l\1\9\i\d\0\x\0\z\a\j\h\i\q\w\y\c\d\1\6\e\i\l\3\c\u\o\g\x\b\m\t\v\d\l\k\k\r\y\l\d\l\h\1\c\5\8\1\t\a\x\e\7\c\x\1\d\5\1\a\q\p\n\4\g\j\0\t\2\r\k\m\8\3\0\v\t\j\o\d\9\x\d\g\r\c\0\k\8\z\k\g\l\6\z\8\c\o\w\q\d\y\h\i\n\r\b\e\k\p\t\4\v\4\h\e\3\3\x\j\3\g\i\2\a\d\3\b\4\i\2\z\w\w\1\8\7\g\v\q\u\h\j\i\1\s\n\e\l\q\s\z\c\1\q\l\1\e\p\w\b\y\w\5\3\y\c\0\q\6\l\u\2\l\3\q\u\2\6\m\z\u\a\7\7\w\t\b\r\8\l\d\i\o\q\f\5\a\n\g\c\o\t\j\v\x\8\z\w\p\n\r\9\t\4\t\y\u\t\b\w\m\4\4\i\l\1\5\l\s\7\g\1\l\4\w\u\a\r\d\7\p\i\f\y\t\n\t\9\h\m\h\l\4\6\c\z\y\j\g\k\a\j\4\8\9\d\h\v\8\y\z\m\5\j\v\l\4\9\h\f\w\l\8\o\p\1\q\c\v\u\8\5\p\d\q\t\o\4\r\u\s\u\j\y\b\f\x\n\7\8\5\8\3\g\a\m\2\8\j\d\i\0\p\v\h\e\8\3\9\n\d\z\z\s\m\e\f\q\e\l\g\z\z\g\r\r\a\a\f\0\w\8\6\2\0\6\0\u\r\a\3\r\p\a\q\f\1\f\x\9\d ]] 00:06:44.227 00:06:44.227 real 0m1.463s 00:06:44.227 user 0m1.040s 00:06:44.227 sys 0m0.589s 00:06:44.227 13:49:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.227 13:49:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:44.227 ************************************ 00:06:44.227 END TEST dd_rw_offset 00:06:44.227 ************************************ 00:06:44.227 13:49:33 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:44.227 13:49:33 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:44.227 13:49:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:44.227 13:49:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:44.227 13:49:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:44.227 13:49:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:44.227 13:49:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:44.227 13:49:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:44.227 13:49:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:44.227 13:49:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:44.227 13:49:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:44.227 { 00:06:44.227 "subsystems": [ 00:06:44.227 { 00:06:44.227 "subsystem": "bdev", 00:06:44.227 "config": [ 00:06:44.227 { 00:06:44.227 "params": { 00:06:44.227 "trtype": "pcie", 00:06:44.227 "traddr": "0000:00:10.0", 00:06:44.227 "name": "Nvme0" 00:06:44.227 }, 00:06:44.227 "method": "bdev_nvme_attach_controller" 00:06:44.227 }, 00:06:44.227 { 00:06:44.227 "method": "bdev_wait_for_examine" 00:06:44.227 } 00:06:44.227 ] 00:06:44.227 } 00:06:44.227 ] 00:06:44.227 } 00:06:44.227 [2024-07-25 13:49:33.246111] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
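
The dd_rw_offset test traced above round-trips a single 4 KiB payload at a one-block offset: it writes dd.dump0 to the bdev with --seek=1, reads the same region back with --skip=1 --count=1, and compares the bytes. The long backslash-escaped operand in the [[ ... ]] line is simply how bash xtrace renders a quoted right-hand side; it is the same 4096-character payload. A condensed reconstruction, reusing the helpers sketched earlier (redirections are assumptions; the spdk_dd flags are taken from the trace):

    count=1 seek=1 skip=1
    data=$(gen_bytes 4096)
    printf '%s' "$data" > "$dump0"
    "$SPDK_DD" --if="$dump0" --ob=Nvme0n1 --seek="$seek" --json <(gen_conf)
    "$SPDK_DD" --ib=Nvme0n1 --of="$dump1" --skip="$skip" --count="$count" --json <(gen_conf)
    read -rn4096 data_check < "$dump1"
    [[ $data == "$data_check" ]]            # fails the test if the read-back differs
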
00:06:44.227 [2024-07-25 13:49:33.246210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61984 ] 00:06:44.487 [2024-07-25 13:49:33.386511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.487 [2024-07-25 13:49:33.496547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.746 [2024-07-25 13:49:33.550027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.004  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:45.004 00:06:45.004 13:49:33 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.004 00:06:45.004 real 0m19.395s 00:06:45.004 user 0m14.131s 00:06:45.004 sys 0m6.931s 00:06:45.004 13:49:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.004 13:49:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:45.004 ************************************ 00:06:45.004 END TEST spdk_dd_basic_rw 00:06:45.004 ************************************ 00:06:45.004 13:49:33 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:45.004 13:49:33 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.004 13:49:33 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.004 13:49:33 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:45.004 ************************************ 00:06:45.004 START TEST spdk_dd_posix 00:06:45.004 ************************************ 00:06:45.004 13:49:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:45.004 * Looking for test storage... 
00:06:45.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:45.004 * First test run, liburing in use 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:45.004 ************************************ 00:06:45.004 START TEST dd_flag_append 00:06:45.004 ************************************ 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:45.004 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=7h9wprucb07mfrvunsdnkq7v66znp1je 00:06:45.005 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:45.005 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:45.005 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:45.005 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=fdve0s5b5rh0tb30hf4sk243jdzxu5he 00:06:45.005 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 7h9wprucb07mfrvunsdnkq7v66znp1je 00:06:45.005 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s fdve0s5b5rh0tb30hf4sk243jdzxu5he 00:06:45.005 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:45.262 [2024-07-25 13:49:34.085068] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:45.262 [2024-07-25 13:49:34.085168] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62038 ] 00:06:45.262 [2024-07-25 13:49:34.219480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.520 [2024-07-25 13:49:34.332633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.520 [2024-07-25 13:49:34.386476] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.779  Copying: 32/32 [B] (average 31 kBps) 00:06:45.779 00:06:45.779 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ fdve0s5b5rh0tb30hf4sk243jdzxu5he7h9wprucb07mfrvunsdnkq7v66znp1je == \f\d\v\e\0\s\5\b\5\r\h\0\t\b\3\0\h\f\4\s\k\2\4\3\j\d\z\x\u\5\h\e\7\h\9\w\p\r\u\c\b\0\7\m\f\r\v\u\n\s\d\n\k\q\7\v\6\6\z\n\p\1\j\e ]] 00:06:45.779 00:06:45.779 real 0m0.606s 00:06:45.779 user 0m0.349s 00:06:45.779 sys 0m0.270s 00:06:45.779 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.779 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:45.779 ************************************ 00:06:45.779 END TEST dd_flag_append 00:06:45.779 ************************************ 00:06:45.779 13:49:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:45.779 13:49:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.779 13:49:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.779 13:49:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:45.779 ************************************ 00:06:45.779 START TEST dd_flag_directory 00:06:45.779 ************************************ 00:06:45.779 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:06:45.779 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:45.779 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:45.779 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:45.779 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.779 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.779 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.779 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.779 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.779 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 
-- # case "$(type -t "$arg")" in 00:06:45.779 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.779 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:45.779 13:49:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:45.779 [2024-07-25 13:49:34.749083] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:45.779 [2024-07-25 13:49:34.749203] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62072 ] 00:06:46.037 [2024-07-25 13:49:34.887038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.037 [2024-07-25 13:49:35.012934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.296 [2024-07-25 13:49:35.069337] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.296 [2024-07-25 13:49:35.103654] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:46.296 [2024-07-25 13:49:35.103707] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:46.296 [2024-07-25 13:49:35.103738] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.296 [2024-07-25 13:49:35.215406] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:46.296 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:46.296 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:46.296 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:46.296 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:46.296 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:46.296 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:46.296 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:46.296 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:46.296 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:46.296 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.296 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.296 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.296 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.296 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.296 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.296 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.296 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.296 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:46.554 [2024-07-25 13:49:35.375834] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:46.555 [2024-07-25 13:49:35.375945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62089 ] 00:06:46.555 [2024-07-25 13:49:35.514970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.813 [2024-07-25 13:49:35.631123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.813 [2024-07-25 13:49:35.683678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.813 [2024-07-25 13:49:35.717160] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:46.813 [2024-07-25 13:49:35.717214] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:46.813 [2024-07-25 13:49:35.717246] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.813 [2024-07-25 13:49:35.827814] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:47.072 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:47.072 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:47.072 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:47.072 ************************************ 00:06:47.072 END TEST dd_flag_directory 00:06:47.072 ************************************ 00:06:47.072 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:47.072 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:47.072 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:47.072 00:06:47.072 real 0m1.250s 00:06:47.072 user 0m0.734s 00:06:47.072 sys 0m0.307s 00:06:47.072 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.072 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:47.072 13:49:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test 
dd_flag_nofollow nofollow 00:06:47.072 13:49:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.072 13:49:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.072 13:49:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:47.072 ************************************ 00:06:47.072 START TEST dd_flag_nofollow 00:06:47.072 ************************************ 00:06:47.072 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:06:47.072 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:47.072 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:47.072 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:47.072 13:49:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:47.072 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.072 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:47.072 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.072 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.072 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.072 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.072 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.072 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.072 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.072 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.072 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:47.072 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.072 [2024-07-25 13:49:36.063151] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:47.072 [2024-07-25 13:49:36.063264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62112 ] 00:06:47.331 [2024-07-25 13:49:36.202796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.331 [2024-07-25 13:49:36.315870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.590 [2024-07-25 13:49:36.367945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.590 [2024-07-25 13:49:36.400279] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:47.590 [2024-07-25 13:49:36.400369] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:47.590 [2024-07-25 13:49:36.400402] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:47.590 [2024-07-25 13:49:36.508573] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:47.590 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:47.590 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:47.590 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:47.590 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:47.590 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:47.590 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:47.590 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:47.590 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:47.590 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:47.590 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.590 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.590 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.590 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.590 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.590 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.590 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.590 13:49:36 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:47.590 13:49:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:47.849 [2024-07-25 13:49:36.680538] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:47.849 [2024-07-25 13:49:36.680681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62127 ] 00:06:47.849 [2024-07-25 13:49:36.823132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.108 [2024-07-25 13:49:36.936966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.108 [2024-07-25 13:49:36.989184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.108 [2024-07-25 13:49:37.021282] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:48.108 [2024-07-25 13:49:37.021369] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:48.108 [2024-07-25 13:49:37.021388] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.108 [2024-07-25 13:49:37.129990] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:48.367 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:48.367 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.367 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:48.367 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:48.367 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:48.367 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.367 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:48.367 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:48.367 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:48.367 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:48.367 [2024-07-25 13:49:37.295398] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:48.367 [2024-07-25 13:49:37.295697] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62129 ] 00:06:48.626 [2024-07-25 13:49:37.432261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.626 [2024-07-25 13:49:37.547004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.626 [2024-07-25 13:49:37.598999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.885  Copying: 512/512 [B] (average 500 kBps) 00:06:48.885 00:06:48.885 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ ioenbzfxrvstox3huoshlylkpazpetebc2lxguxuk94rc7oqnds6vt1isjlhc47i9js8dz7f7nmel3qnulvsw1h6v98p3jn96hsu5ap1y9a5n204k45p450zmxjqjr1318a0ojn0tucwoqejo54y1qkjupz8cqaoprv5z1poy4bsys9tda1lthln258edjqogmnem67nny8nt1ce9dqflorzro2qeo6cti3b9kn9ez2dnh3yjfh7on81ado5pk0xdk1yvxlmfde0fqjqbwjwc382dizemcv1siumn7xt2gj9exi28dgem73prfrtl8jemzz8xcqhs87uu8h86mtuwyx2wbnp057zxgbppkgy0n822lifwmzdkjkpwtnlmed6lm4zhi2a78bk4299l8molp14zz9dg7el5q2zbn242dxqnudzw5bi8sc4ujnsqfkp9l3kphr0kfdo3wl326ih8es8vndj8uu535kpdp6do4sjlakhvrgn7o2fr9u4ct6y == \i\o\e\n\b\z\f\x\r\v\s\t\o\x\3\h\u\o\s\h\l\y\l\k\p\a\z\p\e\t\e\b\c\2\l\x\g\u\x\u\k\9\4\r\c\7\o\q\n\d\s\6\v\t\1\i\s\j\l\h\c\4\7\i\9\j\s\8\d\z\7\f\7\n\m\e\l\3\q\n\u\l\v\s\w\1\h\6\v\9\8\p\3\j\n\9\6\h\s\u\5\a\p\1\y\9\a\5\n\2\0\4\k\4\5\p\4\5\0\z\m\x\j\q\j\r\1\3\1\8\a\0\o\j\n\0\t\u\c\w\o\q\e\j\o\5\4\y\1\q\k\j\u\p\z\8\c\q\a\o\p\r\v\5\z\1\p\o\y\4\b\s\y\s\9\t\d\a\1\l\t\h\l\n\2\5\8\e\d\j\q\o\g\m\n\e\m\6\7\n\n\y\8\n\t\1\c\e\9\d\q\f\l\o\r\z\r\o\2\q\e\o\6\c\t\i\3\b\9\k\n\9\e\z\2\d\n\h\3\y\j\f\h\7\o\n\8\1\a\d\o\5\p\k\0\x\d\k\1\y\v\x\l\m\f\d\e\0\f\q\j\q\b\w\j\w\c\3\8\2\d\i\z\e\m\c\v\1\s\i\u\m\n\7\x\t\2\g\j\9\e\x\i\2\8\d\g\e\m\7\3\p\r\f\r\t\l\8\j\e\m\z\z\8\x\c\q\h\s\8\7\u\u\8\h\8\6\m\t\u\w\y\x\2\w\b\n\p\0\5\7\z\x\g\b\p\p\k\g\y\0\n\8\2\2\l\i\f\w\m\z\d\k\j\k\p\w\t\n\l\m\e\d\6\l\m\4\z\h\i\2\a\7\8\b\k\4\2\9\9\l\8\m\o\l\p\1\4\z\z\9\d\g\7\e\l\5\q\2\z\b\n\2\4\2\d\x\q\n\u\d\z\w\5\b\i\8\s\c\4\u\j\n\s\q\f\k\p\9\l\3\k\p\h\r\0\k\f\d\o\3\w\l\3\2\6\i\h\8\e\s\8\v\n\d\j\8\u\u\5\3\5\k\p\d\p\6\d\o\4\s\j\l\a\k\h\v\r\g\n\7\o\2\f\r\9\u\4\c\t\6\y ]] 00:06:48.885 00:06:48.885 real 0m1.848s 00:06:48.885 user 0m1.100s 00:06:48.885 sys 0m0.548s 00:06:48.885 ************************************ 00:06:48.885 END TEST dd_flag_nofollow 00:06:48.885 ************************************ 00:06:48.885 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.885 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:48.885 13:49:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:48.885 13:49:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.885 13:49:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.885 13:49:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:48.885 ************************************ 00:06:48.885 START TEST dd_flag_noatime 00:06:48.885 ************************************ 00:06:48.885 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:06:48.885 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:06:48.885 13:49:37 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:48.885 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:48.885 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:48.885 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:48.885 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:48.885 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721915377 00:06:48.885 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:49.144 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721915377 00:06:49.144 13:49:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:50.080 13:49:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:50.080 [2024-07-25 13:49:38.974966] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:50.080 [2024-07-25 13:49:38.975080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62178 ] 00:06:50.339 [2024-07-25 13:49:39.115869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.339 [2024-07-25 13:49:39.243123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.339 [2024-07-25 13:49:39.295359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.598  Copying: 512/512 [B] (average 500 kBps) 00:06:50.598 00:06:50.598 13:49:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:50.598 13:49:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721915377 )) 00:06:50.598 13:49:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:50.598 13:49:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721915377 )) 00:06:50.598 13:49:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:50.598 [2024-07-25 13:49:39.603500] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:50.598 [2024-07-25 13:49:39.603611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62197 ] 00:06:50.856 [2024-07-25 13:49:39.741739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.856 [2024-07-25 13:49:39.856813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.115 [2024-07-25 13:49:39.908506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.115  Copying: 512/512 [B] (average 500 kBps) 00:06:51.115 00:06:51.373 13:49:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.373 ************************************ 00:06:51.373 END TEST dd_flag_noatime 00:06:51.373 ************************************ 00:06:51.373 13:49:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721915379 )) 00:06:51.373 00:06:51.373 real 0m2.255s 00:06:51.373 user 0m0.749s 00:06:51.373 sys 0m0.533s 00:06:51.373 13:49:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.373 13:49:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:51.373 13:49:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:51.373 13:49:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.373 13:49:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.373 13:49:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:51.373 ************************************ 00:06:51.373 START TEST dd_flags_misc 00:06:51.373 ************************************ 00:06:51.373 13:49:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:06:51.373 13:49:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:51.373 13:49:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:51.373 13:49:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:51.373 13:49:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:51.373 13:49:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:51.373 13:49:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:51.373 13:49:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:51.373 13:49:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:51.373 13:49:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:51.373 [2024-07-25 13:49:40.260695] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:51.373 [2024-07-25 13:49:40.260820] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62220 ] 00:06:51.373 [2024-07-25 13:49:40.395571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.630 [2024-07-25 13:49:40.509155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.630 [2024-07-25 13:49:40.561523] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.888  Copying: 512/512 [B] (average 500 kBps) 00:06:51.888 00:06:51.889 13:49:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ cokj1p1ubyvzhq0k1fnh65yxxrebmgc0lobmttx9oiqp1vmwudpzhcup5p7nv1dzs2f3iux53p6h5a20fs7435q4rwy67649fxbpbehx2ivzhp0bh2ibt98lylsolvsb54ig80pea0ruqmxtq6znky6g1iaobc6ldsst3nd9x2z966wj6fz6lvyz3dcybblassahkplpgpyz7cd4nye954p010mpwwz2qevaowvyuctj4ardki5fby7ub8zce281200dwmvhwnahf7kgbs8755jd54kaa9xp6m4c7mz4g5847g7dzy6fq76enw1hvnxyzdghvcnfn39pftgdzxubhleunwvhtdry0dwnivoi34cyd3cq222jtrk210elmdo79zqdf4643l2706f94kpwvwd4hqdhdksi1pq2uzn0b6tpv4alsxhp24hyxz6hspfs6q1i9bua7wdwvx3qwkibl1ggdb8vh3334nhdgai58aga4vho90p475x77ytle3o9 == \c\o\k\j\1\p\1\u\b\y\v\z\h\q\0\k\1\f\n\h\6\5\y\x\x\r\e\b\m\g\c\0\l\o\b\m\t\t\x\9\o\i\q\p\1\v\m\w\u\d\p\z\h\c\u\p\5\p\7\n\v\1\d\z\s\2\f\3\i\u\x\5\3\p\6\h\5\a\2\0\f\s\7\4\3\5\q\4\r\w\y\6\7\6\4\9\f\x\b\p\b\e\h\x\2\i\v\z\h\p\0\b\h\2\i\b\t\9\8\l\y\l\s\o\l\v\s\b\5\4\i\g\8\0\p\e\a\0\r\u\q\m\x\t\q\6\z\n\k\y\6\g\1\i\a\o\b\c\6\l\d\s\s\t\3\n\d\9\x\2\z\9\6\6\w\j\6\f\z\6\l\v\y\z\3\d\c\y\b\b\l\a\s\s\a\h\k\p\l\p\g\p\y\z\7\c\d\4\n\y\e\9\5\4\p\0\1\0\m\p\w\w\z\2\q\e\v\a\o\w\v\y\u\c\t\j\4\a\r\d\k\i\5\f\b\y\7\u\b\8\z\c\e\2\8\1\2\0\0\d\w\m\v\h\w\n\a\h\f\7\k\g\b\s\8\7\5\5\j\d\5\4\k\a\a\9\x\p\6\m\4\c\7\m\z\4\g\5\8\4\7\g\7\d\z\y\6\f\q\7\6\e\n\w\1\h\v\n\x\y\z\d\g\h\v\c\n\f\n\3\9\p\f\t\g\d\z\x\u\b\h\l\e\u\n\w\v\h\t\d\r\y\0\d\w\n\i\v\o\i\3\4\c\y\d\3\c\q\2\2\2\j\t\r\k\2\1\0\e\l\m\d\o\7\9\z\q\d\f\4\6\4\3\l\2\7\0\6\f\9\4\k\p\w\v\w\d\4\h\q\d\h\d\k\s\i\1\p\q\2\u\z\n\0\b\6\t\p\v\4\a\l\s\x\h\p\2\4\h\y\x\z\6\h\s\p\f\s\6\q\1\i\9\b\u\a\7\w\d\w\v\x\3\q\w\k\i\b\l\1\g\g\d\b\8\v\h\3\3\3\4\n\h\d\g\a\i\5\8\a\g\a\4\v\h\o\9\0\p\4\7\5\x\7\7\y\t\l\e\3\o\9 ]] 00:06:51.889 13:49:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:51.889 13:49:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:51.889 [2024-07-25 13:49:40.858478] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:51.889 [2024-07-25 13:49:40.858581] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62235 ] 00:06:52.146 [2024-07-25 13:49:40.995681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.146 [2024-07-25 13:49:41.111098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.146 [2024-07-25 13:49:41.162558] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.407  Copying: 512/512 [B] (average 500 kBps) 00:06:52.407 00:06:52.407 13:49:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ cokj1p1ubyvzhq0k1fnh65yxxrebmgc0lobmttx9oiqp1vmwudpzhcup5p7nv1dzs2f3iux53p6h5a20fs7435q4rwy67649fxbpbehx2ivzhp0bh2ibt98lylsolvsb54ig80pea0ruqmxtq6znky6g1iaobc6ldsst3nd9x2z966wj6fz6lvyz3dcybblassahkplpgpyz7cd4nye954p010mpwwz2qevaowvyuctj4ardki5fby7ub8zce281200dwmvhwnahf7kgbs8755jd54kaa9xp6m4c7mz4g5847g7dzy6fq76enw1hvnxyzdghvcnfn39pftgdzxubhleunwvhtdry0dwnivoi34cyd3cq222jtrk210elmdo79zqdf4643l2706f94kpwvwd4hqdhdksi1pq2uzn0b6tpv4alsxhp24hyxz6hspfs6q1i9bua7wdwvx3qwkibl1ggdb8vh3334nhdgai58aga4vho90p475x77ytle3o9 == \c\o\k\j\1\p\1\u\b\y\v\z\h\q\0\k\1\f\n\h\6\5\y\x\x\r\e\b\m\g\c\0\l\o\b\m\t\t\x\9\o\i\q\p\1\v\m\w\u\d\p\z\h\c\u\p\5\p\7\n\v\1\d\z\s\2\f\3\i\u\x\5\3\p\6\h\5\a\2\0\f\s\7\4\3\5\q\4\r\w\y\6\7\6\4\9\f\x\b\p\b\e\h\x\2\i\v\z\h\p\0\b\h\2\i\b\t\9\8\l\y\l\s\o\l\v\s\b\5\4\i\g\8\0\p\e\a\0\r\u\q\m\x\t\q\6\z\n\k\y\6\g\1\i\a\o\b\c\6\l\d\s\s\t\3\n\d\9\x\2\z\9\6\6\w\j\6\f\z\6\l\v\y\z\3\d\c\y\b\b\l\a\s\s\a\h\k\p\l\p\g\p\y\z\7\c\d\4\n\y\e\9\5\4\p\0\1\0\m\p\w\w\z\2\q\e\v\a\o\w\v\y\u\c\t\j\4\a\r\d\k\i\5\f\b\y\7\u\b\8\z\c\e\2\8\1\2\0\0\d\w\m\v\h\w\n\a\h\f\7\k\g\b\s\8\7\5\5\j\d\5\4\k\a\a\9\x\p\6\m\4\c\7\m\z\4\g\5\8\4\7\g\7\d\z\y\6\f\q\7\6\e\n\w\1\h\v\n\x\y\z\d\g\h\v\c\n\f\n\3\9\p\f\t\g\d\z\x\u\b\h\l\e\u\n\w\v\h\t\d\r\y\0\d\w\n\i\v\o\i\3\4\c\y\d\3\c\q\2\2\2\j\t\r\k\2\1\0\e\l\m\d\o\7\9\z\q\d\f\4\6\4\3\l\2\7\0\6\f\9\4\k\p\w\v\w\d\4\h\q\d\h\d\k\s\i\1\p\q\2\u\z\n\0\b\6\t\p\v\4\a\l\s\x\h\p\2\4\h\y\x\z\6\h\s\p\f\s\6\q\1\i\9\b\u\a\7\w\d\w\v\x\3\q\w\k\i\b\l\1\g\g\d\b\8\v\h\3\3\3\4\n\h\d\g\a\i\5\8\a\g\a\4\v\h\o\9\0\p\4\7\5\x\7\7\y\t\l\e\3\o\9 ]] 00:06:52.407 13:49:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:52.407 13:49:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:52.678 [2024-07-25 13:49:41.440173] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:52.678 [2024-07-25 13:49:41.440264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62245 ] 00:06:52.678 [2024-07-25 13:49:41.573439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.678 [2024-07-25 13:49:41.688259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.936 [2024-07-25 13:49:41.740633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.194  Copying: 512/512 [B] (average 250 kBps) 00:06:53.194 00:06:53.194 13:49:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ cokj1p1ubyvzhq0k1fnh65yxxrebmgc0lobmttx9oiqp1vmwudpzhcup5p7nv1dzs2f3iux53p6h5a20fs7435q4rwy67649fxbpbehx2ivzhp0bh2ibt98lylsolvsb54ig80pea0ruqmxtq6znky6g1iaobc6ldsst3nd9x2z966wj6fz6lvyz3dcybblassahkplpgpyz7cd4nye954p010mpwwz2qevaowvyuctj4ardki5fby7ub8zce281200dwmvhwnahf7kgbs8755jd54kaa9xp6m4c7mz4g5847g7dzy6fq76enw1hvnxyzdghvcnfn39pftgdzxubhleunwvhtdry0dwnivoi34cyd3cq222jtrk210elmdo79zqdf4643l2706f94kpwvwd4hqdhdksi1pq2uzn0b6tpv4alsxhp24hyxz6hspfs6q1i9bua7wdwvx3qwkibl1ggdb8vh3334nhdgai58aga4vho90p475x77ytle3o9 == \c\o\k\j\1\p\1\u\b\y\v\z\h\q\0\k\1\f\n\h\6\5\y\x\x\r\e\b\m\g\c\0\l\o\b\m\t\t\x\9\o\i\q\p\1\v\m\w\u\d\p\z\h\c\u\p\5\p\7\n\v\1\d\z\s\2\f\3\i\u\x\5\3\p\6\h\5\a\2\0\f\s\7\4\3\5\q\4\r\w\y\6\7\6\4\9\f\x\b\p\b\e\h\x\2\i\v\z\h\p\0\b\h\2\i\b\t\9\8\l\y\l\s\o\l\v\s\b\5\4\i\g\8\0\p\e\a\0\r\u\q\m\x\t\q\6\z\n\k\y\6\g\1\i\a\o\b\c\6\l\d\s\s\t\3\n\d\9\x\2\z\9\6\6\w\j\6\f\z\6\l\v\y\z\3\d\c\y\b\b\l\a\s\s\a\h\k\p\l\p\g\p\y\z\7\c\d\4\n\y\e\9\5\4\p\0\1\0\m\p\w\w\z\2\q\e\v\a\o\w\v\y\u\c\t\j\4\a\r\d\k\i\5\f\b\y\7\u\b\8\z\c\e\2\8\1\2\0\0\d\w\m\v\h\w\n\a\h\f\7\k\g\b\s\8\7\5\5\j\d\5\4\k\a\a\9\x\p\6\m\4\c\7\m\z\4\g\5\8\4\7\g\7\d\z\y\6\f\q\7\6\e\n\w\1\h\v\n\x\y\z\d\g\h\v\c\n\f\n\3\9\p\f\t\g\d\z\x\u\b\h\l\e\u\n\w\v\h\t\d\r\y\0\d\w\n\i\v\o\i\3\4\c\y\d\3\c\q\2\2\2\j\t\r\k\2\1\0\e\l\m\d\o\7\9\z\q\d\f\4\6\4\3\l\2\7\0\6\f\9\4\k\p\w\v\w\d\4\h\q\d\h\d\k\s\i\1\p\q\2\u\z\n\0\b\6\t\p\v\4\a\l\s\x\h\p\2\4\h\y\x\z\6\h\s\p\f\s\6\q\1\i\9\b\u\a\7\w\d\w\v\x\3\q\w\k\i\b\l\1\g\g\d\b\8\v\h\3\3\3\4\n\h\d\g\a\i\5\8\a\g\a\4\v\h\o\9\0\p\4\7\5\x\7\7\y\t\l\e\3\o\9 ]] 00:06:53.194 13:49:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:53.194 13:49:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:53.194 [2024-07-25 13:49:42.039799] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:53.194 [2024-07-25 13:49:42.039915] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62254 ] 00:06:53.194 [2024-07-25 13:49:42.177023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.452 [2024-07-25 13:49:42.292486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.452 [2024-07-25 13:49:42.344491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.711  Copying: 512/512 [B] (average 250 kBps) 00:06:53.711 00:06:53.711 13:49:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ cokj1p1ubyvzhq0k1fnh65yxxrebmgc0lobmttx9oiqp1vmwudpzhcup5p7nv1dzs2f3iux53p6h5a20fs7435q4rwy67649fxbpbehx2ivzhp0bh2ibt98lylsolvsb54ig80pea0ruqmxtq6znky6g1iaobc6ldsst3nd9x2z966wj6fz6lvyz3dcybblassahkplpgpyz7cd4nye954p010mpwwz2qevaowvyuctj4ardki5fby7ub8zce281200dwmvhwnahf7kgbs8755jd54kaa9xp6m4c7mz4g5847g7dzy6fq76enw1hvnxyzdghvcnfn39pftgdzxubhleunwvhtdry0dwnivoi34cyd3cq222jtrk210elmdo79zqdf4643l2706f94kpwvwd4hqdhdksi1pq2uzn0b6tpv4alsxhp24hyxz6hspfs6q1i9bua7wdwvx3qwkibl1ggdb8vh3334nhdgai58aga4vho90p475x77ytle3o9 == \c\o\k\j\1\p\1\u\b\y\v\z\h\q\0\k\1\f\n\h\6\5\y\x\x\r\e\b\m\g\c\0\l\o\b\m\t\t\x\9\o\i\q\p\1\v\m\w\u\d\p\z\h\c\u\p\5\p\7\n\v\1\d\z\s\2\f\3\i\u\x\5\3\p\6\h\5\a\2\0\f\s\7\4\3\5\q\4\r\w\y\6\7\6\4\9\f\x\b\p\b\e\h\x\2\i\v\z\h\p\0\b\h\2\i\b\t\9\8\l\y\l\s\o\l\v\s\b\5\4\i\g\8\0\p\e\a\0\r\u\q\m\x\t\q\6\z\n\k\y\6\g\1\i\a\o\b\c\6\l\d\s\s\t\3\n\d\9\x\2\z\9\6\6\w\j\6\f\z\6\l\v\y\z\3\d\c\y\b\b\l\a\s\s\a\h\k\p\l\p\g\p\y\z\7\c\d\4\n\y\e\9\5\4\p\0\1\0\m\p\w\w\z\2\q\e\v\a\o\w\v\y\u\c\t\j\4\a\r\d\k\i\5\f\b\y\7\u\b\8\z\c\e\2\8\1\2\0\0\d\w\m\v\h\w\n\a\h\f\7\k\g\b\s\8\7\5\5\j\d\5\4\k\a\a\9\x\p\6\m\4\c\7\m\z\4\g\5\8\4\7\g\7\d\z\y\6\f\q\7\6\e\n\w\1\h\v\n\x\y\z\d\g\h\v\c\n\f\n\3\9\p\f\t\g\d\z\x\u\b\h\l\e\u\n\w\v\h\t\d\r\y\0\d\w\n\i\v\o\i\3\4\c\y\d\3\c\q\2\2\2\j\t\r\k\2\1\0\e\l\m\d\o\7\9\z\q\d\f\4\6\4\3\l\2\7\0\6\f\9\4\k\p\w\v\w\d\4\h\q\d\h\d\k\s\i\1\p\q\2\u\z\n\0\b\6\t\p\v\4\a\l\s\x\h\p\2\4\h\y\x\z\6\h\s\p\f\s\6\q\1\i\9\b\u\a\7\w\d\w\v\x\3\q\w\k\i\b\l\1\g\g\d\b\8\v\h\3\3\3\4\n\h\d\g\a\i\5\8\a\g\a\4\v\h\o\9\0\p\4\7\5\x\7\7\y\t\l\e\3\o\9 ]] 00:06:53.711 13:49:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:53.711 13:49:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:53.711 13:49:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:53.711 13:49:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:53.711 13:49:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:53.711 13:49:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:53.711 [2024-07-25 13:49:42.672905] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:53.711 [2024-07-25 13:49:42.673216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62269 ] 00:06:53.969 [2024-07-25 13:49:42.806344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.969 [2024-07-25 13:49:42.921994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.969 [2024-07-25 13:49:42.973716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.228  Copying: 512/512 [B] (average 500 kBps) 00:06:54.228 00:06:54.228 13:49:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ o6cywfk5wmcrdl8s5ic2c6jvdzxmyoqhtd04we3fr5275n7wlgorvcpu8sgbu8zh7ixpj1e0fszmqbj1y1p4ebk35rz2drmr85izrrzrecbkx8ekzyinary3kgebudht31dvpud4jfi8q3kmkkimxo5pjdknpcv2apfb3p9ytvg27nxcalcbdow0ppdl5zundviptxbkn6712ni2n1wihykrmpsw029y6tz58bjmlnly5yqdnei3mlidtdcp11f4t5q53frq26afvzcva2alsqm90zsb304tfeje3hnajnks8p23pm9a6p8rhvpiyeow2wc1dn1vthitbzg40k167pauph0xhlxem2940tvx00zcrfoa6v2v94d59upqk2dpkzlgb5giwi8eag7hb2tdmt3iumzf835u8p99zcexwfmtlypbd5mr9wa0ecsrcv28mbasaptkdzx98tads67c1e8dhqm5w09fe8r53fqpucnv1xuorb624pf7hxn25yk6 == \o\6\c\y\w\f\k\5\w\m\c\r\d\l\8\s\5\i\c\2\c\6\j\v\d\z\x\m\y\o\q\h\t\d\0\4\w\e\3\f\r\5\2\7\5\n\7\w\l\g\o\r\v\c\p\u\8\s\g\b\u\8\z\h\7\i\x\p\j\1\e\0\f\s\z\m\q\b\j\1\y\1\p\4\e\b\k\3\5\r\z\2\d\r\m\r\8\5\i\z\r\r\z\r\e\c\b\k\x\8\e\k\z\y\i\n\a\r\y\3\k\g\e\b\u\d\h\t\3\1\d\v\p\u\d\4\j\f\i\8\q\3\k\m\k\k\i\m\x\o\5\p\j\d\k\n\p\c\v\2\a\p\f\b\3\p\9\y\t\v\g\2\7\n\x\c\a\l\c\b\d\o\w\0\p\p\d\l\5\z\u\n\d\v\i\p\t\x\b\k\n\6\7\1\2\n\i\2\n\1\w\i\h\y\k\r\m\p\s\w\0\2\9\y\6\t\z\5\8\b\j\m\l\n\l\y\5\y\q\d\n\e\i\3\m\l\i\d\t\d\c\p\1\1\f\4\t\5\q\5\3\f\r\q\2\6\a\f\v\z\c\v\a\2\a\l\s\q\m\9\0\z\s\b\3\0\4\t\f\e\j\e\3\h\n\a\j\n\k\s\8\p\2\3\p\m\9\a\6\p\8\r\h\v\p\i\y\e\o\w\2\w\c\1\d\n\1\v\t\h\i\t\b\z\g\4\0\k\1\6\7\p\a\u\p\h\0\x\h\l\x\e\m\2\9\4\0\t\v\x\0\0\z\c\r\f\o\a\6\v\2\v\9\4\d\5\9\u\p\q\k\2\d\p\k\z\l\g\b\5\g\i\w\i\8\e\a\g\7\h\b\2\t\d\m\t\3\i\u\m\z\f\8\3\5\u\8\p\9\9\z\c\e\x\w\f\m\t\l\y\p\b\d\5\m\r\9\w\a\0\e\c\s\r\c\v\2\8\m\b\a\s\a\p\t\k\d\z\x\9\8\t\a\d\s\6\7\c\1\e\8\d\h\q\m\5\w\0\9\f\e\8\r\5\3\f\q\p\u\c\n\v\1\x\u\o\r\b\6\2\4\p\f\7\h\x\n\2\5\y\k\6 ]] 00:06:54.228 13:49:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:54.228 13:49:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:54.486 [2024-07-25 13:49:43.271548] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:54.486 [2024-07-25 13:49:43.271653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62273 ] 00:06:54.486 [2024-07-25 13:49:43.405811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.743 [2024-07-25 13:49:43.521437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.743 [2024-07-25 13:49:43.573207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.000  Copying: 512/512 [B] (average 500 kBps) 00:06:55.000 00:06:55.000 13:49:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ o6cywfk5wmcrdl8s5ic2c6jvdzxmyoqhtd04we3fr5275n7wlgorvcpu8sgbu8zh7ixpj1e0fszmqbj1y1p4ebk35rz2drmr85izrrzrecbkx8ekzyinary3kgebudht31dvpud4jfi8q3kmkkimxo5pjdknpcv2apfb3p9ytvg27nxcalcbdow0ppdl5zundviptxbkn6712ni2n1wihykrmpsw029y6tz58bjmlnly5yqdnei3mlidtdcp11f4t5q53frq26afvzcva2alsqm90zsb304tfeje3hnajnks8p23pm9a6p8rhvpiyeow2wc1dn1vthitbzg40k167pauph0xhlxem2940tvx00zcrfoa6v2v94d59upqk2dpkzlgb5giwi8eag7hb2tdmt3iumzf835u8p99zcexwfmtlypbd5mr9wa0ecsrcv28mbasaptkdzx98tads67c1e8dhqm5w09fe8r53fqpucnv1xuorb624pf7hxn25yk6 == \o\6\c\y\w\f\k\5\w\m\c\r\d\l\8\s\5\i\c\2\c\6\j\v\d\z\x\m\y\o\q\h\t\d\0\4\w\e\3\f\r\5\2\7\5\n\7\w\l\g\o\r\v\c\p\u\8\s\g\b\u\8\z\h\7\i\x\p\j\1\e\0\f\s\z\m\q\b\j\1\y\1\p\4\e\b\k\3\5\r\z\2\d\r\m\r\8\5\i\z\r\r\z\r\e\c\b\k\x\8\e\k\z\y\i\n\a\r\y\3\k\g\e\b\u\d\h\t\3\1\d\v\p\u\d\4\j\f\i\8\q\3\k\m\k\k\i\m\x\o\5\p\j\d\k\n\p\c\v\2\a\p\f\b\3\p\9\y\t\v\g\2\7\n\x\c\a\l\c\b\d\o\w\0\p\p\d\l\5\z\u\n\d\v\i\p\t\x\b\k\n\6\7\1\2\n\i\2\n\1\w\i\h\y\k\r\m\p\s\w\0\2\9\y\6\t\z\5\8\b\j\m\l\n\l\y\5\y\q\d\n\e\i\3\m\l\i\d\t\d\c\p\1\1\f\4\t\5\q\5\3\f\r\q\2\6\a\f\v\z\c\v\a\2\a\l\s\q\m\9\0\z\s\b\3\0\4\t\f\e\j\e\3\h\n\a\j\n\k\s\8\p\2\3\p\m\9\a\6\p\8\r\h\v\p\i\y\e\o\w\2\w\c\1\d\n\1\v\t\h\i\t\b\z\g\4\0\k\1\6\7\p\a\u\p\h\0\x\h\l\x\e\m\2\9\4\0\t\v\x\0\0\z\c\r\f\o\a\6\v\2\v\9\4\d\5\9\u\p\q\k\2\d\p\k\z\l\g\b\5\g\i\w\i\8\e\a\g\7\h\b\2\t\d\m\t\3\i\u\m\z\f\8\3\5\u\8\p\9\9\z\c\e\x\w\f\m\t\l\y\p\b\d\5\m\r\9\w\a\0\e\c\s\r\c\v\2\8\m\b\a\s\a\p\t\k\d\z\x\9\8\t\a\d\s\6\7\c\1\e\8\d\h\q\m\5\w\0\9\f\e\8\r\5\3\f\q\p\u\c\n\v\1\x\u\o\r\b\6\2\4\p\f\7\h\x\n\2\5\y\k\6 ]] 00:06:55.000 13:49:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:55.000 13:49:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:55.000 [2024-07-25 13:49:43.896009] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:55.000 [2024-07-25 13:49:43.896117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62288 ] 00:06:55.259 [2024-07-25 13:49:44.033409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.259 [2024-07-25 13:49:44.147766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.259 [2024-07-25 13:49:44.199030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.517  Copying: 512/512 [B] (average 166 kBps) 00:06:55.517 00:06:55.517 13:49:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ o6cywfk5wmcrdl8s5ic2c6jvdzxmyoqhtd04we3fr5275n7wlgorvcpu8sgbu8zh7ixpj1e0fszmqbj1y1p4ebk35rz2drmr85izrrzrecbkx8ekzyinary3kgebudht31dvpud4jfi8q3kmkkimxo5pjdknpcv2apfb3p9ytvg27nxcalcbdow0ppdl5zundviptxbkn6712ni2n1wihykrmpsw029y6tz58bjmlnly5yqdnei3mlidtdcp11f4t5q53frq26afvzcva2alsqm90zsb304tfeje3hnajnks8p23pm9a6p8rhvpiyeow2wc1dn1vthitbzg40k167pauph0xhlxem2940tvx00zcrfoa6v2v94d59upqk2dpkzlgb5giwi8eag7hb2tdmt3iumzf835u8p99zcexwfmtlypbd5mr9wa0ecsrcv28mbasaptkdzx98tads67c1e8dhqm5w09fe8r53fqpucnv1xuorb624pf7hxn25yk6 == \o\6\c\y\w\f\k\5\w\m\c\r\d\l\8\s\5\i\c\2\c\6\j\v\d\z\x\m\y\o\q\h\t\d\0\4\w\e\3\f\r\5\2\7\5\n\7\w\l\g\o\r\v\c\p\u\8\s\g\b\u\8\z\h\7\i\x\p\j\1\e\0\f\s\z\m\q\b\j\1\y\1\p\4\e\b\k\3\5\r\z\2\d\r\m\r\8\5\i\z\r\r\z\r\e\c\b\k\x\8\e\k\z\y\i\n\a\r\y\3\k\g\e\b\u\d\h\t\3\1\d\v\p\u\d\4\j\f\i\8\q\3\k\m\k\k\i\m\x\o\5\p\j\d\k\n\p\c\v\2\a\p\f\b\3\p\9\y\t\v\g\2\7\n\x\c\a\l\c\b\d\o\w\0\p\p\d\l\5\z\u\n\d\v\i\p\t\x\b\k\n\6\7\1\2\n\i\2\n\1\w\i\h\y\k\r\m\p\s\w\0\2\9\y\6\t\z\5\8\b\j\m\l\n\l\y\5\y\q\d\n\e\i\3\m\l\i\d\t\d\c\p\1\1\f\4\t\5\q\5\3\f\r\q\2\6\a\f\v\z\c\v\a\2\a\l\s\q\m\9\0\z\s\b\3\0\4\t\f\e\j\e\3\h\n\a\j\n\k\s\8\p\2\3\p\m\9\a\6\p\8\r\h\v\p\i\y\e\o\w\2\w\c\1\d\n\1\v\t\h\i\t\b\z\g\4\0\k\1\6\7\p\a\u\p\h\0\x\h\l\x\e\m\2\9\4\0\t\v\x\0\0\z\c\r\f\o\a\6\v\2\v\9\4\d\5\9\u\p\q\k\2\d\p\k\z\l\g\b\5\g\i\w\i\8\e\a\g\7\h\b\2\t\d\m\t\3\i\u\m\z\f\8\3\5\u\8\p\9\9\z\c\e\x\w\f\m\t\l\y\p\b\d\5\m\r\9\w\a\0\e\c\s\r\c\v\2\8\m\b\a\s\a\p\t\k\d\z\x\9\8\t\a\d\s\6\7\c\1\e\8\d\h\q\m\5\w\0\9\f\e\8\r\5\3\f\q\p\u\c\n\v\1\x\u\o\r\b\6\2\4\p\f\7\h\x\n\2\5\y\k\6 ]] 00:06:55.517 13:49:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:55.517 13:49:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:55.517 [2024-07-25 13:49:44.495098] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:55.517 [2024-07-25 13:49:44.495202] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62300 ] 00:06:55.775 [2024-07-25 13:49:44.630050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.775 [2024-07-25 13:49:44.747098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.775 [2024-07-25 13:49:44.799050] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.290  Copying: 512/512 [B] (average 250 kBps) 00:06:56.290 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ o6cywfk5wmcrdl8s5ic2c6jvdzxmyoqhtd04we3fr5275n7wlgorvcpu8sgbu8zh7ixpj1e0fszmqbj1y1p4ebk35rz2drmr85izrrzrecbkx8ekzyinary3kgebudht31dvpud4jfi8q3kmkkimxo5pjdknpcv2apfb3p9ytvg27nxcalcbdow0ppdl5zundviptxbkn6712ni2n1wihykrmpsw029y6tz58bjmlnly5yqdnei3mlidtdcp11f4t5q53frq26afvzcva2alsqm90zsb304tfeje3hnajnks8p23pm9a6p8rhvpiyeow2wc1dn1vthitbzg40k167pauph0xhlxem2940tvx00zcrfoa6v2v94d59upqk2dpkzlgb5giwi8eag7hb2tdmt3iumzf835u8p99zcexwfmtlypbd5mr9wa0ecsrcv28mbasaptkdzx98tads67c1e8dhqm5w09fe8r53fqpucnv1xuorb624pf7hxn25yk6 == \o\6\c\y\w\f\k\5\w\m\c\r\d\l\8\s\5\i\c\2\c\6\j\v\d\z\x\m\y\o\q\h\t\d\0\4\w\e\3\f\r\5\2\7\5\n\7\w\l\g\o\r\v\c\p\u\8\s\g\b\u\8\z\h\7\i\x\p\j\1\e\0\f\s\z\m\q\b\j\1\y\1\p\4\e\b\k\3\5\r\z\2\d\r\m\r\8\5\i\z\r\r\z\r\e\c\b\k\x\8\e\k\z\y\i\n\a\r\y\3\k\g\e\b\u\d\h\t\3\1\d\v\p\u\d\4\j\f\i\8\q\3\k\m\k\k\i\m\x\o\5\p\j\d\k\n\p\c\v\2\a\p\f\b\3\p\9\y\t\v\g\2\7\n\x\c\a\l\c\b\d\o\w\0\p\p\d\l\5\z\u\n\d\v\i\p\t\x\b\k\n\6\7\1\2\n\i\2\n\1\w\i\h\y\k\r\m\p\s\w\0\2\9\y\6\t\z\5\8\b\j\m\l\n\l\y\5\y\q\d\n\e\i\3\m\l\i\d\t\d\c\p\1\1\f\4\t\5\q\5\3\f\r\q\2\6\a\f\v\z\c\v\a\2\a\l\s\q\m\9\0\z\s\b\3\0\4\t\f\e\j\e\3\h\n\a\j\n\k\s\8\p\2\3\p\m\9\a\6\p\8\r\h\v\p\i\y\e\o\w\2\w\c\1\d\n\1\v\t\h\i\t\b\z\g\4\0\k\1\6\7\p\a\u\p\h\0\x\h\l\x\e\m\2\9\4\0\t\v\x\0\0\z\c\r\f\o\a\6\v\2\v\9\4\d\5\9\u\p\q\k\2\d\p\k\z\l\g\b\5\g\i\w\i\8\e\a\g\7\h\b\2\t\d\m\t\3\i\u\m\z\f\8\3\5\u\8\p\9\9\z\c\e\x\w\f\m\t\l\y\p\b\d\5\m\r\9\w\a\0\e\c\s\r\c\v\2\8\m\b\a\s\a\p\t\k\d\z\x\9\8\t\a\d\s\6\7\c\1\e\8\d\h\q\m\5\w\0\9\f\e\8\r\5\3\f\q\p\u\c\n\v\1\x\u\o\r\b\6\2\4\p\f\7\h\x\n\2\5\y\k\6 ]] 00:06:56.291 00:06:56.291 real 0m4.867s 00:06:56.291 user 0m2.896s 00:06:56.291 sys 0m2.080s 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:56.291 ************************************ 00:06:56.291 END TEST dd_flags_misc 00:06:56.291 ************************************ 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:56.291 * Second test run, disabling liburing, forcing AIO 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
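(Editor's aside, not part of the captured output: the dd_flags_misc runs above copy the same 512-byte dump file while pairing a nonblocking read with O_SYNC and then O_DSYNC writes, and verify each copy through the escaped [[ ... == ... ]] string comparison; the posix suite is then re-run with --aio, i.e. with liburing disabled. A rough stand-alone equivalent, assuming GNU coreutils dd and hypothetical file names rather than the suite's helpers, is:)

  # Hypothetical sketch, assuming GNU coreutils dd: repeat the nonblock-read /
  # sync-or-dsync-write pairs seen above and check the copy byte-for-byte.
  for oflag in sync dsync; do
    dd if=dd.dump0 of=dd.dump1 iflag=nonblock oflag="$oflag" status=none
    cmp -s dd.dump0 dd.dump1 && echo "copy verified (oflag=$oflag)"
  done

(The flag names used by the test correspond to the open(2) flags O_NONBLOCK, O_SYNC and O_DSYNC requested on the dump files.)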
00:06:56.291 ************************************ 00:06:56.291 START TEST dd_flag_append_forced_aio 00:06:56.291 ************************************ 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=tbky4yr5l12fgostcaqmz8dfw2w1jp0g 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=z5a5n17ab0lbswrtvcj1lotm7zcaoz2x 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s tbky4yr5l12fgostcaqmz8dfw2w1jp0g 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s z5a5n17ab0lbswrtvcj1lotm7zcaoz2x 00:06:56.291 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:56.291 [2024-07-25 13:49:45.183974] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:06:56.291 [2024-07-25 13:49:45.184091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62328 ] 00:06:56.549 [2024-07-25 13:49:45.322507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.549 [2024-07-25 13:49:45.436904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.549 [2024-07-25 13:49:45.489518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.808  Copying: 32/32 [B] (average 31 kBps) 00:06:56.808 00:06:56.808 ************************************ 00:06:56.808 END TEST dd_flag_append_forced_aio 00:06:56.808 ************************************ 00:06:56.808 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ z5a5n17ab0lbswrtvcj1lotm7zcaoz2xtbky4yr5l12fgostcaqmz8dfw2w1jp0g == \z\5\a\5\n\1\7\a\b\0\l\b\s\w\r\t\v\c\j\1\l\o\t\m\7\z\c\a\o\z\2\x\t\b\k\y\4\y\r\5\l\1\2\f\g\o\s\t\c\a\q\m\z\8\d\f\w\2\w\1\j\p\0\g ]] 00:06:56.808 00:06:56.808 real 0m0.622s 00:06:56.808 user 0m0.355s 00:06:56.808 sys 0m0.146s 00:06:56.808 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.808 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:56.808 13:49:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:56.808 13:49:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.808 13:49:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.808 13:49:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:56.808 ************************************ 00:06:56.808 START TEST dd_flag_directory_forced_aio 00:06:56.808 ************************************ 00:06:56.808 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:06:56.808 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:56.808 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:56.808 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:56.808 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.808 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.808 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.808 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.808 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.808 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.808 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.808 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:56.808 13:49:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:57.066 [2024-07-25 13:49:45.843971] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:57.066 [2024-07-25 13:49:45.844074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62360 ] 00:06:57.066 [2024-07-25 13:49:45.985032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.325 [2024-07-25 13:49:46.113836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.325 [2024-07-25 13:49:46.167550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.325 [2024-07-25 13:49:46.200547] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:57.325 [2024-07-25 13:49:46.200602] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:57.325 [2024-07-25 13:49:46.200634] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.325 [2024-07-25 13:49:46.308872] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:57.584 13:49:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:57.584 13:49:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:57.584 13:49:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:57.584 13:49:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:57.584 13:49:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:57.584 13:49:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:57.584 13:49:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:57.584 13:49:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:57.584 13:49:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 
00:06:57.584 13:49:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.584 13:49:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.584 13:49:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.584 13:49:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.584 13:49:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.584 13:49:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.584 13:49:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.584 13:49:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:57.584 13:49:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:57.584 [2024-07-25 13:49:46.463794] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:57.584 [2024-07-25 13:49:46.463887] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62364 ] 00:06:57.584 [2024-07-25 13:49:46.602435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.842 [2024-07-25 13:49:46.719254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.842 [2024-07-25 13:49:46.772130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.842 [2024-07-25 13:49:46.806038] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:57.842 [2024-07-25 13:49:46.806085] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:57.842 [2024-07-25 13:49:46.806117] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.101 [2024-07-25 13:49:46.917969] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:58.102 ************************************ 00:06:58.102 END TEST dd_flag_directory_forced_aio 00:06:58.102 ************************************ 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@670 -- # es=1 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.102 00:06:58.102 real 0m1.235s 00:06:58.102 user 0m0.736s 00:06:58.102 sys 0m0.289s 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:58.102 ************************************ 00:06:58.102 START TEST dd_flag_nofollow_forced_aio 00:06:58.102 ************************************ 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.102 13:49:47 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.102 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.360 [2024-07-25 13:49:47.143395] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:58.360 [2024-07-25 13:49:47.143507] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62398 ] 00:06:58.360 [2024-07-25 13:49:47.282746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.619 [2024-07-25 13:49:47.397441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.619 [2024-07-25 13:49:47.450894] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.619 [2024-07-25 13:49:47.484742] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:58.619 [2024-07-25 13:49:47.484794] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:58.619 [2024-07-25 13:49:47.484827] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.619 [2024-07-25 13:49:47.596042] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:58.877 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:58.877 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.877 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:58.877 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:58.877 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:58.877 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.877 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:58.877 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:58.877 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:58.877 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.877 13:49:47 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.877 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.877 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.877 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.877 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.877 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.877 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.877 13:49:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:58.877 [2024-07-25 13:49:47.754657] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:58.877 [2024-07-25 13:49:47.754771] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62408 ] 00:06:58.877 [2024-07-25 13:49:47.893601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.199 [2024-07-25 13:49:48.009104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.199 [2024-07-25 13:49:48.061163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.199 [2024-07-25 13:49:48.094662] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:59.199 [2024-07-25 13:49:48.094732] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:59.199 [2024-07-25 13:49:48.094765] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.461 [2024-07-25 13:49:48.205720] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:59.461 13:49:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:59.461 13:49:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:59.461 13:49:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:59.461 13:49:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:59.461 13:49:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:59.461 13:49:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:59.461 13:49:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:06:59.461 13:49:48 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:59.461 13:49:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:59.461 13:49:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:59.461 [2024-07-25 13:49:48.366658] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:06:59.461 [2024-07-25 13:49:48.366758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62415 ] 00:06:59.719 [2024-07-25 13:49:48.505135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.719 [2024-07-25 13:49:48.620903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.719 [2024-07-25 13:49:48.674050] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.977  Copying: 512/512 [B] (average 500 kBps) 00:06:59.977 00:06:59.977 13:49:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ xzphdbsf8d6lgtxjpis9sdj25raewb2xmyrncemloolfat515cham644y7jqyx2ee9o09fzq2n81xk9uezlx5e5qa5vs5xt7afeyki1e8qxlm5uz6jr3wq1zgk5fjhuziw9bhkgdyi85aaqyd8jzsha1zk2igwhwike3w1niithbt93q0htu210zrv32z6cjlsr99cwjb53syw3eg1tzeob3hjhv5cbysnmwun75hdth4zm67rnzva5h0ruex0zmkz3m6tkanliiw02pu2wyx9b1q0aqcq16hlsb72vk39vcfawr0oeev4eytchimzugyxgcrmidb1ct6ukgm3gk6qt3329njr2yntz44uarwo55xql97hlxouupxqvckrghncudvsamjnfi0frcthyq6xmrp8oi6swibswfrlyj5qxiuz2lv1lea826c29kcth1du9fcgtwivmvj9nkplfjpj1855pcsxmfzmecu4bxsyxybm5t88535vte1mlxtmk7 == \x\z\p\h\d\b\s\f\8\d\6\l\g\t\x\j\p\i\s\9\s\d\j\2\5\r\a\e\w\b\2\x\m\y\r\n\c\e\m\l\o\o\l\f\a\t\5\1\5\c\h\a\m\6\4\4\y\7\j\q\y\x\2\e\e\9\o\0\9\f\z\q\2\n\8\1\x\k\9\u\e\z\l\x\5\e\5\q\a\5\v\s\5\x\t\7\a\f\e\y\k\i\1\e\8\q\x\l\m\5\u\z\6\j\r\3\w\q\1\z\g\k\5\f\j\h\u\z\i\w\9\b\h\k\g\d\y\i\8\5\a\a\q\y\d\8\j\z\s\h\a\1\z\k\2\i\g\w\h\w\i\k\e\3\w\1\n\i\i\t\h\b\t\9\3\q\0\h\t\u\2\1\0\z\r\v\3\2\z\6\c\j\l\s\r\9\9\c\w\j\b\5\3\s\y\w\3\e\g\1\t\z\e\o\b\3\h\j\h\v\5\c\b\y\s\n\m\w\u\n\7\5\h\d\t\h\4\z\m\6\7\r\n\z\v\a\5\h\0\r\u\e\x\0\z\m\k\z\3\m\6\t\k\a\n\l\i\i\w\0\2\p\u\2\w\y\x\9\b\1\q\0\a\q\c\q\1\6\h\l\s\b\7\2\v\k\3\9\v\c\f\a\w\r\0\o\e\e\v\4\e\y\t\c\h\i\m\z\u\g\y\x\g\c\r\m\i\d\b\1\c\t\6\u\k\g\m\3\g\k\6\q\t\3\3\2\9\n\j\r\2\y\n\t\z\4\4\u\a\r\w\o\5\5\x\q\l\9\7\h\l\x\o\u\u\p\x\q\v\c\k\r\g\h\n\c\u\d\v\s\a\m\j\n\f\i\0\f\r\c\t\h\y\q\6\x\m\r\p\8\o\i\6\s\w\i\b\s\w\f\r\l\y\j\5\q\x\i\u\z\2\l\v\1\l\e\a\8\2\6\c\2\9\k\c\t\h\1\d\u\9\f\c\g\t\w\i\v\m\v\j\9\n\k\p\l\f\j\p\j\1\8\5\5\p\c\s\x\m\f\z\m\e\c\u\4\b\x\s\y\x\y\b\m\5\t\8\8\5\3\5\v\t\e\1\m\l\x\t\m\k\7 ]] 00:06:59.977 00:06:59.977 real 0m1.862s 00:06:59.977 user 0m1.075s 00:06:59.977 sys 0m0.451s 00:06:59.977 13:49:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.977 ************************************ 00:06:59.977 END TEST dd_flag_nofollow_forced_aio 00:06:59.977 ************************************ 00:06:59.977 13:49:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:59.977 13:49:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:06:59.977 
13:49:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.977 13:49:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.977 13:49:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:59.977 ************************************ 00:06:59.977 START TEST dd_flag_noatime_forced_aio 00:06:59.977 ************************************ 00:06:59.977 13:49:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:06:59.978 13:49:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:59.978 13:49:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:59.978 13:49:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:59.978 13:49:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:59.978 13:49:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:59.978 13:49:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:59.978 13:49:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721915388 00:06:59.978 13:49:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:00.235 13:49:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721915388 00:07:00.236 13:49:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:01.171 13:49:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.171 [2024-07-25 13:49:50.068960] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:01.171 [2024-07-25 13:49:50.069348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62461 ] 00:07:01.429 [2024-07-25 13:49:50.210135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.429 [2024-07-25 13:49:50.327297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.429 [2024-07-25 13:49:50.379947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.688  Copying: 512/512 [B] (average 500 kBps) 00:07:01.688 00:07:01.688 13:49:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.688 13:49:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721915388 )) 00:07:01.688 13:49:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.688 13:49:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721915388 )) 00:07:01.688 13:49:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.688 [2024-07-25 13:49:50.708074] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:07:01.688 [2024-07-25 13:49:50.708163] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62473 ] 00:07:01.946 [2024-07-25 13:49:50.839607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.946 [2024-07-25 13:49:50.958681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.205 [2024-07-25 13:49:51.013430] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.464  Copying: 512/512 [B] (average 500 kBps) 00:07:02.464 00:07:02.464 13:49:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:02.464 13:49:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721915391 )) 00:07:02.464 00:07:02.464 real 0m2.329s 00:07:02.464 user 0m0.762s 00:07:02.464 sys 0m0.306s 00:07:02.464 13:49:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.464 ************************************ 00:07:02.464 END TEST dd_flag_noatime_forced_aio 00:07:02.464 ************************************ 00:07:02.464 13:49:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:02.464 13:49:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:02.464 13:49:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.464 13:49:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.464 13:49:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:02.464 
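(Editor's aside, not part of the captured output: the noatime test above records the access times of both dump files with stat --printf=%X, copies dump0 with the noatime read flag and asserts the source atime is unchanged, then sleeps one second, copies again without the flag, and asserts the atime has advanced. A minimal stand-alone version of the first half, assuming GNU coreutils dd, a file owned by the caller, and a filesystem whose mount options still update atime, is:)

  # Hypothetical sketch: a read opened with O_NOATIME (dd's iflag=noatime)
  # must leave the source file's access time untouched.
  before=$(stat --printf=%X dd.dump0)
  dd if=dd.dump0 of=dd.dump1 iflag=noatime status=none
  after=$(stat --printf=%X dd.dump0)
  [ "$before" -eq "$after" ] && echo "atime preserved by iflag=noatime"

(O_NOATIME is refused with EPERM unless the caller owns the file, and relatime mounts can mask the second, "atime must advance" half of the check, so only the first half is sketched here.)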
************************************ 00:07:02.464 START TEST dd_flags_misc_forced_aio 00:07:02.464 ************************************ 00:07:02.464 13:49:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:07:02.464 13:49:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:02.464 13:49:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:02.464 13:49:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:02.464 13:49:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:02.464 13:49:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:02.464 13:49:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:02.464 13:49:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:02.464 13:49:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:02.464 13:49:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:02.464 [2024-07-25 13:49:51.422661] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:07:02.464 [2024-07-25 13:49:51.422741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62499 ] 00:07:02.764 [2024-07-25 13:49:51.556901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.764 [2024-07-25 13:49:51.673339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.764 [2024-07-25 13:49:51.725678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.023  Copying: 512/512 [B] (average 500 kBps) 00:07:03.023 00:07:03.023 13:49:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ z5pxvvsfg86q23yypi4dg0zmfvvylvhdd9inq0yde6v1tbajwmc9npwjpvioy5mskn55ajflgbfka0frg5ajgkf83twsh3tzwfs4tw6e936siwof979ybzruj9eho1u5d4fy88lp5fxkaik7e9eoscy4x6nxtg0h1mttsx0lhq0p2nn3u5gtmyrlc4maus13u5ixwwxrewivjtx8itgtwn7uftl43hw7ygw3tr51nzhj2qhrr748psz1u1cjcuj05ehnx0s7gc2wtkei89jcvh252fjh4eaqi7op35ntj5xu8h0r9fr96hvsszgbftkb1hnqrzginltxunvc4liwy6o8ntdczybvwl3o32my9fguv82nbmilebadlil6r7jbxn0pg7dnej7jqiumog1a46s649rqselsqub8wzavqmqpf3scm2es80qjo4dmabcr9t0t3ivuoekldjc3gdj7kr3kp1t4nhfacoygbn8ds4cw8fjplb88rcy0gr5yodhr == 
\z\5\p\x\v\v\s\f\g\8\6\q\2\3\y\y\p\i\4\d\g\0\z\m\f\v\v\y\l\v\h\d\d\9\i\n\q\0\y\d\e\6\v\1\t\b\a\j\w\m\c\9\n\p\w\j\p\v\i\o\y\5\m\s\k\n\5\5\a\j\f\l\g\b\f\k\a\0\f\r\g\5\a\j\g\k\f\8\3\t\w\s\h\3\t\z\w\f\s\4\t\w\6\e\9\3\6\s\i\w\o\f\9\7\9\y\b\z\r\u\j\9\e\h\o\1\u\5\d\4\f\y\8\8\l\p\5\f\x\k\a\i\k\7\e\9\e\o\s\c\y\4\x\6\n\x\t\g\0\h\1\m\t\t\s\x\0\l\h\q\0\p\2\n\n\3\u\5\g\t\m\y\r\l\c\4\m\a\u\s\1\3\u\5\i\x\w\w\x\r\e\w\i\v\j\t\x\8\i\t\g\t\w\n\7\u\f\t\l\4\3\h\w\7\y\g\w\3\t\r\5\1\n\z\h\j\2\q\h\r\r\7\4\8\p\s\z\1\u\1\c\j\c\u\j\0\5\e\h\n\x\0\s\7\g\c\2\w\t\k\e\i\8\9\j\c\v\h\2\5\2\f\j\h\4\e\a\q\i\7\o\p\3\5\n\t\j\5\x\u\8\h\0\r\9\f\r\9\6\h\v\s\s\z\g\b\f\t\k\b\1\h\n\q\r\z\g\i\n\l\t\x\u\n\v\c\4\l\i\w\y\6\o\8\n\t\d\c\z\y\b\v\w\l\3\o\3\2\m\y\9\f\g\u\v\8\2\n\b\m\i\l\e\b\a\d\l\i\l\6\r\7\j\b\x\n\0\p\g\7\d\n\e\j\7\j\q\i\u\m\o\g\1\a\4\6\s\6\4\9\r\q\s\e\l\s\q\u\b\8\w\z\a\v\q\m\q\p\f\3\s\c\m\2\e\s\8\0\q\j\o\4\d\m\a\b\c\r\9\t\0\t\3\i\v\u\o\e\k\l\d\j\c\3\g\d\j\7\k\r\3\k\p\1\t\4\n\h\f\a\c\o\y\g\b\n\8\d\s\4\c\w\8\f\j\p\l\b\8\8\r\c\y\0\g\r\5\y\o\d\h\r ]] 00:07:03.023 13:49:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:03.023 13:49:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:03.282 [2024-07-25 13:49:52.055081] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:07:03.282 [2024-07-25 13:49:52.055189] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62508 ] 00:07:03.282 [2024-07-25 13:49:52.194502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.559 [2024-07-25 13:49:52.313485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.559 [2024-07-25 13:49:52.365680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.818  Copying: 512/512 [B] (average 500 kBps) 00:07:03.818 00:07:03.818 13:49:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ z5pxvvsfg86q23yypi4dg0zmfvvylvhdd9inq0yde6v1tbajwmc9npwjpvioy5mskn55ajflgbfka0frg5ajgkf83twsh3tzwfs4tw6e936siwof979ybzruj9eho1u5d4fy88lp5fxkaik7e9eoscy4x6nxtg0h1mttsx0lhq0p2nn3u5gtmyrlc4maus13u5ixwwxrewivjtx8itgtwn7uftl43hw7ygw3tr51nzhj2qhrr748psz1u1cjcuj05ehnx0s7gc2wtkei89jcvh252fjh4eaqi7op35ntj5xu8h0r9fr96hvsszgbftkb1hnqrzginltxunvc4liwy6o8ntdczybvwl3o32my9fguv82nbmilebadlil6r7jbxn0pg7dnej7jqiumog1a46s649rqselsqub8wzavqmqpf3scm2es80qjo4dmabcr9t0t3ivuoekldjc3gdj7kr3kp1t4nhfacoygbn8ds4cw8fjplb88rcy0gr5yodhr == 
\z\5\p\x\v\v\s\f\g\8\6\q\2\3\y\y\p\i\4\d\g\0\z\m\f\v\v\y\l\v\h\d\d\9\i\n\q\0\y\d\e\6\v\1\t\b\a\j\w\m\c\9\n\p\w\j\p\v\i\o\y\5\m\s\k\n\5\5\a\j\f\l\g\b\f\k\a\0\f\r\g\5\a\j\g\k\f\8\3\t\w\s\h\3\t\z\w\f\s\4\t\w\6\e\9\3\6\s\i\w\o\f\9\7\9\y\b\z\r\u\j\9\e\h\o\1\u\5\d\4\f\y\8\8\l\p\5\f\x\k\a\i\k\7\e\9\e\o\s\c\y\4\x\6\n\x\t\g\0\h\1\m\t\t\s\x\0\l\h\q\0\p\2\n\n\3\u\5\g\t\m\y\r\l\c\4\m\a\u\s\1\3\u\5\i\x\w\w\x\r\e\w\i\v\j\t\x\8\i\t\g\t\w\n\7\u\f\t\l\4\3\h\w\7\y\g\w\3\t\r\5\1\n\z\h\j\2\q\h\r\r\7\4\8\p\s\z\1\u\1\c\j\c\u\j\0\5\e\h\n\x\0\s\7\g\c\2\w\t\k\e\i\8\9\j\c\v\h\2\5\2\f\j\h\4\e\a\q\i\7\o\p\3\5\n\t\j\5\x\u\8\h\0\r\9\f\r\9\6\h\v\s\s\z\g\b\f\t\k\b\1\h\n\q\r\z\g\i\n\l\t\x\u\n\v\c\4\l\i\w\y\6\o\8\n\t\d\c\z\y\b\v\w\l\3\o\3\2\m\y\9\f\g\u\v\8\2\n\b\m\i\l\e\b\a\d\l\i\l\6\r\7\j\b\x\n\0\p\g\7\d\n\e\j\7\j\q\i\u\m\o\g\1\a\4\6\s\6\4\9\r\q\s\e\l\s\q\u\b\8\w\z\a\v\q\m\q\p\f\3\s\c\m\2\e\s\8\0\q\j\o\4\d\m\a\b\c\r\9\t\0\t\3\i\v\u\o\e\k\l\d\j\c\3\g\d\j\7\k\r\3\k\p\1\t\4\n\h\f\a\c\o\y\g\b\n\8\d\s\4\c\w\8\f\j\p\l\b\8\8\r\c\y\0\g\r\5\y\o\d\h\r ]] 00:07:03.818 13:49:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:03.818 13:49:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:03.818 [2024-07-25 13:49:52.682686] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:07:03.818 [2024-07-25 13:49:52.682787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62521 ] 00:07:03.818 [2024-07-25 13:49:52.821514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.076 [2024-07-25 13:49:52.941316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.076 [2024-07-25 13:49:52.993818] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.334  Copying: 512/512 [B] (average 35 kBps) 00:07:04.334 00:07:04.334 13:49:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ z5pxvvsfg86q23yypi4dg0zmfvvylvhdd9inq0yde6v1tbajwmc9npwjpvioy5mskn55ajflgbfka0frg5ajgkf83twsh3tzwfs4tw6e936siwof979ybzruj9eho1u5d4fy88lp5fxkaik7e9eoscy4x6nxtg0h1mttsx0lhq0p2nn3u5gtmyrlc4maus13u5ixwwxrewivjtx8itgtwn7uftl43hw7ygw3tr51nzhj2qhrr748psz1u1cjcuj05ehnx0s7gc2wtkei89jcvh252fjh4eaqi7op35ntj5xu8h0r9fr96hvsszgbftkb1hnqrzginltxunvc4liwy6o8ntdczybvwl3o32my9fguv82nbmilebadlil6r7jbxn0pg7dnej7jqiumog1a46s649rqselsqub8wzavqmqpf3scm2es80qjo4dmabcr9t0t3ivuoekldjc3gdj7kr3kp1t4nhfacoygbn8ds4cw8fjplb88rcy0gr5yodhr == 
\z\5\p\x\v\v\s\f\g\8\6\q\2\3\y\y\p\i\4\d\g\0\z\m\f\v\v\y\l\v\h\d\d\9\i\n\q\0\y\d\e\6\v\1\t\b\a\j\w\m\c\9\n\p\w\j\p\v\i\o\y\5\m\s\k\n\5\5\a\j\f\l\g\b\f\k\a\0\f\r\g\5\a\j\g\k\f\8\3\t\w\s\h\3\t\z\w\f\s\4\t\w\6\e\9\3\6\s\i\w\o\f\9\7\9\y\b\z\r\u\j\9\e\h\o\1\u\5\d\4\f\y\8\8\l\p\5\f\x\k\a\i\k\7\e\9\e\o\s\c\y\4\x\6\n\x\t\g\0\h\1\m\t\t\s\x\0\l\h\q\0\p\2\n\n\3\u\5\g\t\m\y\r\l\c\4\m\a\u\s\1\3\u\5\i\x\w\w\x\r\e\w\i\v\j\t\x\8\i\t\g\t\w\n\7\u\f\t\l\4\3\h\w\7\y\g\w\3\t\r\5\1\n\z\h\j\2\q\h\r\r\7\4\8\p\s\z\1\u\1\c\j\c\u\j\0\5\e\h\n\x\0\s\7\g\c\2\w\t\k\e\i\8\9\j\c\v\h\2\5\2\f\j\h\4\e\a\q\i\7\o\p\3\5\n\t\j\5\x\u\8\h\0\r\9\f\r\9\6\h\v\s\s\z\g\b\f\t\k\b\1\h\n\q\r\z\g\i\n\l\t\x\u\n\v\c\4\l\i\w\y\6\o\8\n\t\d\c\z\y\b\v\w\l\3\o\3\2\m\y\9\f\g\u\v\8\2\n\b\m\i\l\e\b\a\d\l\i\l\6\r\7\j\b\x\n\0\p\g\7\d\n\e\j\7\j\q\i\u\m\o\g\1\a\4\6\s\6\4\9\r\q\s\e\l\s\q\u\b\8\w\z\a\v\q\m\q\p\f\3\s\c\m\2\e\s\8\0\q\j\o\4\d\m\a\b\c\r\9\t\0\t\3\i\v\u\o\e\k\l\d\j\c\3\g\d\j\7\k\r\3\k\p\1\t\4\n\h\f\a\c\o\y\g\b\n\8\d\s\4\c\w\8\f\j\p\l\b\8\8\r\c\y\0\g\r\5\y\o\d\h\r ]] 00:07:04.334 13:49:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:04.334 13:49:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:04.334 [2024-07-25 13:49:53.331269] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:07:04.334 [2024-07-25 13:49:53.331392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62528 ] 00:07:04.591 [2024-07-25 13:49:53.466668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.591 [2024-07-25 13:49:53.582997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.848 [2024-07-25 13:49:53.634800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.107  Copying: 512/512 [B] (average 500 kBps) 00:07:05.107 00:07:05.107 13:49:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ z5pxvvsfg86q23yypi4dg0zmfvvylvhdd9inq0yde6v1tbajwmc9npwjpvioy5mskn55ajflgbfka0frg5ajgkf83twsh3tzwfs4tw6e936siwof979ybzruj9eho1u5d4fy88lp5fxkaik7e9eoscy4x6nxtg0h1mttsx0lhq0p2nn3u5gtmyrlc4maus13u5ixwwxrewivjtx8itgtwn7uftl43hw7ygw3tr51nzhj2qhrr748psz1u1cjcuj05ehnx0s7gc2wtkei89jcvh252fjh4eaqi7op35ntj5xu8h0r9fr96hvsszgbftkb1hnqrzginltxunvc4liwy6o8ntdczybvwl3o32my9fguv82nbmilebadlil6r7jbxn0pg7dnej7jqiumog1a46s649rqselsqub8wzavqmqpf3scm2es80qjo4dmabcr9t0t3ivuoekldjc3gdj7kr3kp1t4nhfacoygbn8ds4cw8fjplb88rcy0gr5yodhr == 
\z\5\p\x\v\v\s\f\g\8\6\q\2\3\y\y\p\i\4\d\g\0\z\m\f\v\v\y\l\v\h\d\d\9\i\n\q\0\y\d\e\6\v\1\t\b\a\j\w\m\c\9\n\p\w\j\p\v\i\o\y\5\m\s\k\n\5\5\a\j\f\l\g\b\f\k\a\0\f\r\g\5\a\j\g\k\f\8\3\t\w\s\h\3\t\z\w\f\s\4\t\w\6\e\9\3\6\s\i\w\o\f\9\7\9\y\b\z\r\u\j\9\e\h\o\1\u\5\d\4\f\y\8\8\l\p\5\f\x\k\a\i\k\7\e\9\e\o\s\c\y\4\x\6\n\x\t\g\0\h\1\m\t\t\s\x\0\l\h\q\0\p\2\n\n\3\u\5\g\t\m\y\r\l\c\4\m\a\u\s\1\3\u\5\i\x\w\w\x\r\e\w\i\v\j\t\x\8\i\t\g\t\w\n\7\u\f\t\l\4\3\h\w\7\y\g\w\3\t\r\5\1\n\z\h\j\2\q\h\r\r\7\4\8\p\s\z\1\u\1\c\j\c\u\j\0\5\e\h\n\x\0\s\7\g\c\2\w\t\k\e\i\8\9\j\c\v\h\2\5\2\f\j\h\4\e\a\q\i\7\o\p\3\5\n\t\j\5\x\u\8\h\0\r\9\f\r\9\6\h\v\s\s\z\g\b\f\t\k\b\1\h\n\q\r\z\g\i\n\l\t\x\u\n\v\c\4\l\i\w\y\6\o\8\n\t\d\c\z\y\b\v\w\l\3\o\3\2\m\y\9\f\g\u\v\8\2\n\b\m\i\l\e\b\a\d\l\i\l\6\r\7\j\b\x\n\0\p\g\7\d\n\e\j\7\j\q\i\u\m\o\g\1\a\4\6\s\6\4\9\r\q\s\e\l\s\q\u\b\8\w\z\a\v\q\m\q\p\f\3\s\c\m\2\e\s\8\0\q\j\o\4\d\m\a\b\c\r\9\t\0\t\3\i\v\u\o\e\k\l\d\j\c\3\g\d\j\7\k\r\3\k\p\1\t\4\n\h\f\a\c\o\y\g\b\n\8\d\s\4\c\w\8\f\j\p\l\b\8\8\r\c\y\0\g\r\5\y\o\d\h\r ]] 00:07:05.107 13:49:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:05.107 13:49:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:05.107 13:49:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:05.107 13:49:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:05.107 13:49:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:05.107 13:49:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:05.107 [2024-07-25 13:49:53.954469] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:05.107 [2024-07-25 13:49:53.954571] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62536 ] 00:07:05.107 [2024-07-25 13:49:54.093271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.365 [2024-07-25 13:49:54.211586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.365 [2024-07-25 13:49:54.265284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.624  Copying: 512/512 [B] (average 500 kBps) 00:07:05.624 00:07:05.624 13:49:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ktjaoe13l0dgu46h2pocvz1dpepa4u6ovsfh3e2htqwx71ze8io7cm9nv1s10fplunk1zic1gfcbxe1rlkwshokb7acpgreqsfb9cyx6bfmij1rfd25seltgz9dh0t8ph3nd0291zv432xbi6pvrb57zpey4mzl9tjxkaj3b0cr98k5u45i43habf6baa4qd5pg45aiaqpifyauaxxjksj2bpwbi0e9twqouc64c6mfp24vhc0yx4qtxrcfn0hxw3bxwvc37al1utb8nunaorylwf40kwueuzyqs24ybbij5wutfm5p1rc81age6lw7aziz75ufatjh0hcdwuj7eh1swd28uulw3ype11hca7e2et8gu80vwdfe3zjaqqdc6vzumu30pvhpydiwjtm1b1efczr73nx1jy7jxfgmz9n6a117s0d70a7vt0hhplhklbsgt7klpo7h5ynpa1lxx03mi72oo0jh4hjma1ga6vaicwpj74en1euwxutqrbhuy == \k\t\j\a\o\e\1\3\l\0\d\g\u\4\6\h\2\p\o\c\v\z\1\d\p\e\p\a\4\u\6\o\v\s\f\h\3\e\2\h\t\q\w\x\7\1\z\e\8\i\o\7\c\m\9\n\v\1\s\1\0\f\p\l\u\n\k\1\z\i\c\1\g\f\c\b\x\e\1\r\l\k\w\s\h\o\k\b\7\a\c\p\g\r\e\q\s\f\b\9\c\y\x\6\b\f\m\i\j\1\r\f\d\2\5\s\e\l\t\g\z\9\d\h\0\t\8\p\h\3\n\d\0\2\9\1\z\v\4\3\2\x\b\i\6\p\v\r\b\5\7\z\p\e\y\4\m\z\l\9\t\j\x\k\a\j\3\b\0\c\r\9\8\k\5\u\4\5\i\4\3\h\a\b\f\6\b\a\a\4\q\d\5\p\g\4\5\a\i\a\q\p\i\f\y\a\u\a\x\x\j\k\s\j\2\b\p\w\b\i\0\e\9\t\w\q\o\u\c\6\4\c\6\m\f\p\2\4\v\h\c\0\y\x\4\q\t\x\r\c\f\n\0\h\x\w\3\b\x\w\v\c\3\7\a\l\1\u\t\b\8\n\u\n\a\o\r\y\l\w\f\4\0\k\w\u\e\u\z\y\q\s\2\4\y\b\b\i\j\5\w\u\t\f\m\5\p\1\r\c\8\1\a\g\e\6\l\w\7\a\z\i\z\7\5\u\f\a\t\j\h\0\h\c\d\w\u\j\7\e\h\1\s\w\d\2\8\u\u\l\w\3\y\p\e\1\1\h\c\a\7\e\2\e\t\8\g\u\8\0\v\w\d\f\e\3\z\j\a\q\q\d\c\6\v\z\u\m\u\3\0\p\v\h\p\y\d\i\w\j\t\m\1\b\1\e\f\c\z\r\7\3\n\x\1\j\y\7\j\x\f\g\m\z\9\n\6\a\1\1\7\s\0\d\7\0\a\7\v\t\0\h\h\p\l\h\k\l\b\s\g\t\7\k\l\p\o\7\h\5\y\n\p\a\1\l\x\x\0\3\m\i\7\2\o\o\0\j\h\4\h\j\m\a\1\g\a\6\v\a\i\c\w\p\j\7\4\e\n\1\e\u\w\x\u\t\q\r\b\h\u\y ]] 00:07:05.624 13:49:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:05.624 13:49:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:05.624 [2024-07-25 13:49:54.588955] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:05.624 [2024-07-25 13:49:54.589333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62543 ] 00:07:05.882 [2024-07-25 13:49:54.727667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.882 [2024-07-25 13:49:54.843734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.882 [2024-07-25 13:49:54.896408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.140  Copying: 512/512 [B] (average 500 kBps) 00:07:06.140 00:07:06.140 13:49:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ktjaoe13l0dgu46h2pocvz1dpepa4u6ovsfh3e2htqwx71ze8io7cm9nv1s10fplunk1zic1gfcbxe1rlkwshokb7acpgreqsfb9cyx6bfmij1rfd25seltgz9dh0t8ph3nd0291zv432xbi6pvrb57zpey4mzl9tjxkaj3b0cr98k5u45i43habf6baa4qd5pg45aiaqpifyauaxxjksj2bpwbi0e9twqouc64c6mfp24vhc0yx4qtxrcfn0hxw3bxwvc37al1utb8nunaorylwf40kwueuzyqs24ybbij5wutfm5p1rc81age6lw7aziz75ufatjh0hcdwuj7eh1swd28uulw3ype11hca7e2et8gu80vwdfe3zjaqqdc6vzumu30pvhpydiwjtm1b1efczr73nx1jy7jxfgmz9n6a117s0d70a7vt0hhplhklbsgt7klpo7h5ynpa1lxx03mi72oo0jh4hjma1ga6vaicwpj74en1euwxutqrbhuy == \k\t\j\a\o\e\1\3\l\0\d\g\u\4\6\h\2\p\o\c\v\z\1\d\p\e\p\a\4\u\6\o\v\s\f\h\3\e\2\h\t\q\w\x\7\1\z\e\8\i\o\7\c\m\9\n\v\1\s\1\0\f\p\l\u\n\k\1\z\i\c\1\g\f\c\b\x\e\1\r\l\k\w\s\h\o\k\b\7\a\c\p\g\r\e\q\s\f\b\9\c\y\x\6\b\f\m\i\j\1\r\f\d\2\5\s\e\l\t\g\z\9\d\h\0\t\8\p\h\3\n\d\0\2\9\1\z\v\4\3\2\x\b\i\6\p\v\r\b\5\7\z\p\e\y\4\m\z\l\9\t\j\x\k\a\j\3\b\0\c\r\9\8\k\5\u\4\5\i\4\3\h\a\b\f\6\b\a\a\4\q\d\5\p\g\4\5\a\i\a\q\p\i\f\y\a\u\a\x\x\j\k\s\j\2\b\p\w\b\i\0\e\9\t\w\q\o\u\c\6\4\c\6\m\f\p\2\4\v\h\c\0\y\x\4\q\t\x\r\c\f\n\0\h\x\w\3\b\x\w\v\c\3\7\a\l\1\u\t\b\8\n\u\n\a\o\r\y\l\w\f\4\0\k\w\u\e\u\z\y\q\s\2\4\y\b\b\i\j\5\w\u\t\f\m\5\p\1\r\c\8\1\a\g\e\6\l\w\7\a\z\i\z\7\5\u\f\a\t\j\h\0\h\c\d\w\u\j\7\e\h\1\s\w\d\2\8\u\u\l\w\3\y\p\e\1\1\h\c\a\7\e\2\e\t\8\g\u\8\0\v\w\d\f\e\3\z\j\a\q\q\d\c\6\v\z\u\m\u\3\0\p\v\h\p\y\d\i\w\j\t\m\1\b\1\e\f\c\z\r\7\3\n\x\1\j\y\7\j\x\f\g\m\z\9\n\6\a\1\1\7\s\0\d\7\0\a\7\v\t\0\h\h\p\l\h\k\l\b\s\g\t\7\k\l\p\o\7\h\5\y\n\p\a\1\l\x\x\0\3\m\i\7\2\o\o\0\j\h\4\h\j\m\a\1\g\a\6\v\a\i\c\w\p\j\7\4\e\n\1\e\u\w\x\u\t\q\r\b\h\u\y ]] 00:07:06.140 13:49:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:06.140 13:49:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:06.398 [2024-07-25 13:49:55.208575] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:06.398 [2024-07-25 13:49:55.208689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62551 ] 00:07:06.398 [2024-07-25 13:49:55.346464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.656 [2024-07-25 13:49:55.461813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.656 [2024-07-25 13:49:55.513740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.914  Copying: 512/512 [B] (average 250 kBps) 00:07:06.914 00:07:06.914 13:49:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ktjaoe13l0dgu46h2pocvz1dpepa4u6ovsfh3e2htqwx71ze8io7cm9nv1s10fplunk1zic1gfcbxe1rlkwshokb7acpgreqsfb9cyx6bfmij1rfd25seltgz9dh0t8ph3nd0291zv432xbi6pvrb57zpey4mzl9tjxkaj3b0cr98k5u45i43habf6baa4qd5pg45aiaqpifyauaxxjksj2bpwbi0e9twqouc64c6mfp24vhc0yx4qtxrcfn0hxw3bxwvc37al1utb8nunaorylwf40kwueuzyqs24ybbij5wutfm5p1rc81age6lw7aziz75ufatjh0hcdwuj7eh1swd28uulw3ype11hca7e2et8gu80vwdfe3zjaqqdc6vzumu30pvhpydiwjtm1b1efczr73nx1jy7jxfgmz9n6a117s0d70a7vt0hhplhklbsgt7klpo7h5ynpa1lxx03mi72oo0jh4hjma1ga6vaicwpj74en1euwxutqrbhuy == \k\t\j\a\o\e\1\3\l\0\d\g\u\4\6\h\2\p\o\c\v\z\1\d\p\e\p\a\4\u\6\o\v\s\f\h\3\e\2\h\t\q\w\x\7\1\z\e\8\i\o\7\c\m\9\n\v\1\s\1\0\f\p\l\u\n\k\1\z\i\c\1\g\f\c\b\x\e\1\r\l\k\w\s\h\o\k\b\7\a\c\p\g\r\e\q\s\f\b\9\c\y\x\6\b\f\m\i\j\1\r\f\d\2\5\s\e\l\t\g\z\9\d\h\0\t\8\p\h\3\n\d\0\2\9\1\z\v\4\3\2\x\b\i\6\p\v\r\b\5\7\z\p\e\y\4\m\z\l\9\t\j\x\k\a\j\3\b\0\c\r\9\8\k\5\u\4\5\i\4\3\h\a\b\f\6\b\a\a\4\q\d\5\p\g\4\5\a\i\a\q\p\i\f\y\a\u\a\x\x\j\k\s\j\2\b\p\w\b\i\0\e\9\t\w\q\o\u\c\6\4\c\6\m\f\p\2\4\v\h\c\0\y\x\4\q\t\x\r\c\f\n\0\h\x\w\3\b\x\w\v\c\3\7\a\l\1\u\t\b\8\n\u\n\a\o\r\y\l\w\f\4\0\k\w\u\e\u\z\y\q\s\2\4\y\b\b\i\j\5\w\u\t\f\m\5\p\1\r\c\8\1\a\g\e\6\l\w\7\a\z\i\z\7\5\u\f\a\t\j\h\0\h\c\d\w\u\j\7\e\h\1\s\w\d\2\8\u\u\l\w\3\y\p\e\1\1\h\c\a\7\e\2\e\t\8\g\u\8\0\v\w\d\f\e\3\z\j\a\q\q\d\c\6\v\z\u\m\u\3\0\p\v\h\p\y\d\i\w\j\t\m\1\b\1\e\f\c\z\r\7\3\n\x\1\j\y\7\j\x\f\g\m\z\9\n\6\a\1\1\7\s\0\d\7\0\a\7\v\t\0\h\h\p\l\h\k\l\b\s\g\t\7\k\l\p\o\7\h\5\y\n\p\a\1\l\x\x\0\3\m\i\7\2\o\o\0\j\h\4\h\j\m\a\1\g\a\6\v\a\i\c\w\p\j\7\4\e\n\1\e\u\w\x\u\t\q\r\b\h\u\y ]] 00:07:06.914 13:49:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:06.914 13:49:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:06.914 [2024-07-25 13:49:55.829557] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:06.914 [2024-07-25 13:49:55.829671] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62564 ] 00:07:07.172 [2024-07-25 13:49:55.966411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.172 [2024-07-25 13:49:56.079889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.172 [2024-07-25 13:49:56.132126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.431  Copying: 512/512 [B] (average 500 kBps) 00:07:07.431 00:07:07.431 13:49:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ktjaoe13l0dgu46h2pocvz1dpepa4u6ovsfh3e2htqwx71ze8io7cm9nv1s10fplunk1zic1gfcbxe1rlkwshokb7acpgreqsfb9cyx6bfmij1rfd25seltgz9dh0t8ph3nd0291zv432xbi6pvrb57zpey4mzl9tjxkaj3b0cr98k5u45i43habf6baa4qd5pg45aiaqpifyauaxxjksj2bpwbi0e9twqouc64c6mfp24vhc0yx4qtxrcfn0hxw3bxwvc37al1utb8nunaorylwf40kwueuzyqs24ybbij5wutfm5p1rc81age6lw7aziz75ufatjh0hcdwuj7eh1swd28uulw3ype11hca7e2et8gu80vwdfe3zjaqqdc6vzumu30pvhpydiwjtm1b1efczr73nx1jy7jxfgmz9n6a117s0d70a7vt0hhplhklbsgt7klpo7h5ynpa1lxx03mi72oo0jh4hjma1ga6vaicwpj74en1euwxutqrbhuy == \k\t\j\a\o\e\1\3\l\0\d\g\u\4\6\h\2\p\o\c\v\z\1\d\p\e\p\a\4\u\6\o\v\s\f\h\3\e\2\h\t\q\w\x\7\1\z\e\8\i\o\7\c\m\9\n\v\1\s\1\0\f\p\l\u\n\k\1\z\i\c\1\g\f\c\b\x\e\1\r\l\k\w\s\h\o\k\b\7\a\c\p\g\r\e\q\s\f\b\9\c\y\x\6\b\f\m\i\j\1\r\f\d\2\5\s\e\l\t\g\z\9\d\h\0\t\8\p\h\3\n\d\0\2\9\1\z\v\4\3\2\x\b\i\6\p\v\r\b\5\7\z\p\e\y\4\m\z\l\9\t\j\x\k\a\j\3\b\0\c\r\9\8\k\5\u\4\5\i\4\3\h\a\b\f\6\b\a\a\4\q\d\5\p\g\4\5\a\i\a\q\p\i\f\y\a\u\a\x\x\j\k\s\j\2\b\p\w\b\i\0\e\9\t\w\q\o\u\c\6\4\c\6\m\f\p\2\4\v\h\c\0\y\x\4\q\t\x\r\c\f\n\0\h\x\w\3\b\x\w\v\c\3\7\a\l\1\u\t\b\8\n\u\n\a\o\r\y\l\w\f\4\0\k\w\u\e\u\z\y\q\s\2\4\y\b\b\i\j\5\w\u\t\f\m\5\p\1\r\c\8\1\a\g\e\6\l\w\7\a\z\i\z\7\5\u\f\a\t\j\h\0\h\c\d\w\u\j\7\e\h\1\s\w\d\2\8\u\u\l\w\3\y\p\e\1\1\h\c\a\7\e\2\e\t\8\g\u\8\0\v\w\d\f\e\3\z\j\a\q\q\d\c\6\v\z\u\m\u\3\0\p\v\h\p\y\d\i\w\j\t\m\1\b\1\e\f\c\z\r\7\3\n\x\1\j\y\7\j\x\f\g\m\z\9\n\6\a\1\1\7\s\0\d\7\0\a\7\v\t\0\h\h\p\l\h\k\l\b\s\g\t\7\k\l\p\o\7\h\5\y\n\p\a\1\l\x\x\0\3\m\i\7\2\o\o\0\j\h\4\h\j\m\a\1\g\a\6\v\a\i\c\w\p\j\7\4\e\n\1\e\u\w\x\u\t\q\r\b\h\u\y ]] 00:07:07.431 00:07:07.431 real 0m5.038s 00:07:07.431 user 0m2.893s 00:07:07.431 sys 0m1.137s 00:07:07.431 13:49:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.431 ************************************ 00:07:07.431 END TEST dd_flags_misc_forced_aio 00:07:07.431 ************************************ 00:07:07.431 13:49:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:07.431 13:49:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:07.431 13:49:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:07.431 13:49:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:07.431 ************************************ 00:07:07.431 END TEST spdk_dd_posix 00:07:07.431 ************************************ 00:07:07.431 00:07:07.431 real 0m22.525s 00:07:07.431 user 0m11.851s 00:07:07.431 sys 0m6.431s 00:07:07.431 13:49:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 
-- # xtrace_disable 00:07:07.431 13:49:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:07.690 13:49:56 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:07.690 13:49:56 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.690 13:49:56 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.690 13:49:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:07.690 ************************************ 00:07:07.690 START TEST spdk_dd_malloc 00:07:07.690 ************************************ 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:07.690 * Looking for test storage... 00:07:07.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:07.690 ************************************ 00:07:07.690 START TEST dd_malloc_copy 00:07:07.690 ************************************ 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:07.690 13:49:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:07.690 [2024-07-25 13:49:56.651862] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:07.690 [2024-07-25 13:49:56.652617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62632 ] 00:07:07.690 { 00:07:07.690 "subsystems": [ 00:07:07.690 { 00:07:07.690 "subsystem": "bdev", 00:07:07.690 "config": [ 00:07:07.690 { 00:07:07.690 "params": { 00:07:07.690 "block_size": 512, 00:07:07.690 "num_blocks": 1048576, 00:07:07.690 "name": "malloc0" 00:07:07.690 }, 00:07:07.690 "method": "bdev_malloc_create" 00:07:07.690 }, 00:07:07.690 { 00:07:07.690 "params": { 00:07:07.690 "block_size": 512, 00:07:07.690 "num_blocks": 1048576, 00:07:07.690 "name": "malloc1" 00:07:07.690 }, 00:07:07.690 "method": "bdev_malloc_create" 00:07:07.690 }, 00:07:07.690 { 00:07:07.690 "method": "bdev_wait_for_examine" 00:07:07.690 } 00:07:07.690 ] 00:07:07.690 } 00:07:07.690 ] 00:07:07.690 } 00:07:07.949 [2024-07-25 13:49:56.790314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.949 [2024-07-25 13:49:56.903341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.949 [2024-07-25 13:49:56.956114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.765  Copying: 198/512 [MB] (198 MBps) Copying: 397/512 [MB] (198 MBps) Copying: 512/512 [MB] (average 196 MBps) 00:07:11.765 00:07:11.765 13:50:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:11.765 13:50:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:11.765 13:50:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:11.765 13:50:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:11.765 [2024-07-25 13:50:00.551941] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:11.765 [2024-07-25 13:50:00.552046] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62683 ] 00:07:11.765 { 00:07:11.765 "subsystems": [ 00:07:11.765 { 00:07:11.765 "subsystem": "bdev", 00:07:11.765 "config": [ 00:07:11.765 { 00:07:11.765 "params": { 00:07:11.765 "block_size": 512, 00:07:11.765 "num_blocks": 1048576, 00:07:11.765 "name": "malloc0" 00:07:11.765 }, 00:07:11.765 "method": "bdev_malloc_create" 00:07:11.765 }, 00:07:11.765 { 00:07:11.765 "params": { 00:07:11.765 "block_size": 512, 00:07:11.765 "num_blocks": 1048576, 00:07:11.765 "name": "malloc1" 00:07:11.765 }, 00:07:11.765 "method": "bdev_malloc_create" 00:07:11.765 }, 00:07:11.765 { 00:07:11.765 "method": "bdev_wait_for_examine" 00:07:11.765 } 00:07:11.765 ] 00:07:11.765 } 00:07:11.765 ] 00:07:11.765 } 00:07:11.765 [2024-07-25 13:50:00.690031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.023 [2024-07-25 13:50:00.808584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.023 [2024-07-25 13:50:00.864094] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:15.469  Copying: 206/512 [MB] (206 MBps) Copying: 419/512 [MB] (213 MBps) Copying: 512/512 [MB] (average 209 MBps) 00:07:15.469 00:07:15.469 00:07:15.469 real 0m7.661s 00:07:15.469 user 0m6.659s 00:07:15.469 sys 0m0.835s 00:07:15.469 13:50:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.469 ************************************ 00:07:15.469 END TEST dd_malloc_copy 00:07:15.469 ************************************ 00:07:15.469 13:50:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:15.469 ************************************ 00:07:15.469 END TEST spdk_dd_malloc 00:07:15.469 ************************************ 00:07:15.469 00:07:15.469 real 0m7.798s 00:07:15.469 user 0m6.713s 00:07:15.469 sys 0m0.916s 00:07:15.469 13:50:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.469 13:50:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:15.469 13:50:04 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:15.469 13:50:04 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:15.469 13:50:04 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.469 13:50:04 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:15.469 ************************************ 00:07:15.469 START TEST spdk_dd_bdev_to_bdev 00:07:15.469 ************************************ 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:15.469 * Looking for test storage... 
00:07:15.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:15.469 
13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:15.469 ************************************ 00:07:15.469 START TEST dd_inflate_file 00:07:15.469 ************************************ 00:07:15.469 13:50:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:15.728 [2024-07-25 13:50:04.510545] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:15.728 [2024-07-25 13:50:04.510670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62790 ] 00:07:15.728 [2024-07-25 13:50:04.646981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.728 [2024-07-25 13:50:04.748229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.986 [2024-07-25 13:50:04.804410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.245  Copying: 64/64 [MB] (average 1600 MBps) 00:07:16.245 00:07:16.245 ************************************ 00:07:16.245 END TEST dd_inflate_file 00:07:16.245 ************************************ 00:07:16.245 00:07:16.245 real 0m0.632s 00:07:16.245 user 0m0.377s 00:07:16.245 sys 0m0.305s 00:07:16.245 13:50:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.245 13:50:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:16.245 13:50:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:16.245 13:50:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:16.245 13:50:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:16.245 13:50:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:16.245 13:50:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.245 13:50:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:16.245 13:50:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:16.245 13:50:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:16.245 13:50:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:16.245 ************************************ 00:07:16.245 START TEST dd_copy_to_out_bdev 00:07:16.245 ************************************ 00:07:16.245 13:50:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:16.245 { 00:07:16.245 "subsystems": [ 00:07:16.245 { 00:07:16.245 "subsystem": "bdev", 00:07:16.245 "config": [ 00:07:16.245 { 00:07:16.245 "params": { 00:07:16.245 "trtype": "pcie", 00:07:16.245 "traddr": "0000:00:10.0", 00:07:16.245 "name": "Nvme0" 00:07:16.245 }, 00:07:16.245 "method": "bdev_nvme_attach_controller" 00:07:16.245 }, 00:07:16.245 { 00:07:16.245 "params": { 00:07:16.245 "trtype": "pcie", 00:07:16.245 "traddr": "0000:00:11.0", 00:07:16.245 "name": "Nvme1" 00:07:16.245 }, 00:07:16.245 "method": "bdev_nvme_attach_controller" 00:07:16.245 }, 00:07:16.245 { 00:07:16.245 "method": "bdev_wait_for_examine" 00:07:16.245 } 00:07:16.245 ] 00:07:16.245 } 00:07:16.245 ] 00:07:16.245 } 00:07:16.245 [2024-07-25 13:50:05.194791] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:16.245 [2024-07-25 13:50:05.194901] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62829 ] 00:07:16.504 [2024-07-25 13:50:05.334784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.504 [2024-07-25 13:50:05.434119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.504 [2024-07-25 13:50:05.491200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.141  Copying: 58/64 [MB] (58 MBps) Copying: 64/64 [MB] (average 58 MBps) 00:07:18.141 00:07:18.141 ************************************ 00:07:18.141 END TEST dd_copy_to_out_bdev 00:07:18.141 ************************************ 00:07:18.141 00:07:18.141 real 0m1.882s 00:07:18.141 user 0m1.639s 00:07:18.141 sys 0m1.444s 00:07:18.141 13:50:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.141 13:50:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:18.141 13:50:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:18.141 13:50:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:18.141 13:50:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.141 13:50:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.141 13:50:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:18.141 ************************************ 00:07:18.141 START TEST dd_offset_magic 00:07:18.141 ************************************ 00:07:18.141 13:50:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:07:18.141 13:50:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:18.141 13:50:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:18.141 13:50:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:18.141 13:50:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:18.141 13:50:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:18.141 13:50:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:18.141 13:50:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:18.141 13:50:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:18.141 [2024-07-25 13:50:07.116089] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:18.141 [2024-07-25 13:50:07.116164] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62874 ] 00:07:18.141 { 00:07:18.141 "subsystems": [ 00:07:18.141 { 00:07:18.141 "subsystem": "bdev", 00:07:18.141 "config": [ 00:07:18.141 { 00:07:18.141 "params": { 00:07:18.141 "trtype": "pcie", 00:07:18.141 "traddr": "0000:00:10.0", 00:07:18.141 "name": "Nvme0" 00:07:18.141 }, 00:07:18.141 "method": "bdev_nvme_attach_controller" 00:07:18.141 }, 00:07:18.141 { 00:07:18.141 "params": { 00:07:18.141 "trtype": "pcie", 00:07:18.141 "traddr": "0000:00:11.0", 00:07:18.141 "name": "Nvme1" 00:07:18.141 }, 00:07:18.141 "method": "bdev_nvme_attach_controller" 00:07:18.141 }, 00:07:18.141 { 00:07:18.141 "method": "bdev_wait_for_examine" 00:07:18.141 } 00:07:18.141 ] 00:07:18.141 } 00:07:18.141 ] 00:07:18.141 } 00:07:18.400 [2024-07-25 13:50:07.247435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.400 [2024-07-25 13:50:07.371479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.400 [2024-07-25 13:50:07.429526] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.918  Copying: 65/65 [MB] (average 915 MBps) 00:07:18.918 00:07:18.918 13:50:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:18.918 13:50:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:18.918 13:50:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:18.918 13:50:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:19.180 [2024-07-25 13:50:07.990310] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:19.180 [2024-07-25 13:50:07.990433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62894 ] 00:07:19.180 { 00:07:19.180 "subsystems": [ 00:07:19.180 { 00:07:19.180 "subsystem": "bdev", 00:07:19.180 "config": [ 00:07:19.180 { 00:07:19.180 "params": { 00:07:19.180 "trtype": "pcie", 00:07:19.180 "traddr": "0000:00:10.0", 00:07:19.180 "name": "Nvme0" 00:07:19.180 }, 00:07:19.180 "method": "bdev_nvme_attach_controller" 00:07:19.180 }, 00:07:19.180 { 00:07:19.180 "params": { 00:07:19.180 "trtype": "pcie", 00:07:19.180 "traddr": "0000:00:11.0", 00:07:19.180 "name": "Nvme1" 00:07:19.180 }, 00:07:19.180 "method": "bdev_nvme_attach_controller" 00:07:19.180 }, 00:07:19.180 { 00:07:19.180 "method": "bdev_wait_for_examine" 00:07:19.180 } 00:07:19.180 ] 00:07:19.180 } 00:07:19.180 ] 00:07:19.180 } 00:07:19.180 [2024-07-25 13:50:08.126564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.439 [2024-07-25 13:50:08.246038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.439 [2024-07-25 13:50:08.300923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.699  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:19.699 00:07:19.699 13:50:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:19.699 13:50:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:19.699 13:50:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:19.699 13:50:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:19.699 13:50:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:19.699 13:50:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:19.699 13:50:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:19.958 [2024-07-25 13:50:08.750946] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:19.958 [2024-07-25 13:50:08.751041] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62905 ] 00:07:19.958 { 00:07:19.958 "subsystems": [ 00:07:19.958 { 00:07:19.958 "subsystem": "bdev", 00:07:19.958 "config": [ 00:07:19.958 { 00:07:19.958 "params": { 00:07:19.958 "trtype": "pcie", 00:07:19.958 "traddr": "0000:00:10.0", 00:07:19.958 "name": "Nvme0" 00:07:19.958 }, 00:07:19.958 "method": "bdev_nvme_attach_controller" 00:07:19.958 }, 00:07:19.958 { 00:07:19.958 "params": { 00:07:19.958 "trtype": "pcie", 00:07:19.958 "traddr": "0000:00:11.0", 00:07:19.958 "name": "Nvme1" 00:07:19.958 }, 00:07:19.958 "method": "bdev_nvme_attach_controller" 00:07:19.958 }, 00:07:19.958 { 00:07:19.958 "method": "bdev_wait_for_examine" 00:07:19.958 } 00:07:19.958 ] 00:07:19.958 } 00:07:19.958 ] 00:07:19.958 } 00:07:19.958 [2024-07-25 13:50:08.891239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.217 [2024-07-25 13:50:09.017024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.217 [2024-07-25 13:50:09.072595] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.751  Copying: 65/65 [MB] (average 1015 MBps) 00:07:20.751 00:07:20.751 13:50:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:20.751 13:50:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:20.751 13:50:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:20.751 13:50:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:20.751 [2024-07-25 13:50:09.629388] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:20.751 [2024-07-25 13:50:09.629941] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62925 ] 00:07:20.751 { 00:07:20.751 "subsystems": [ 00:07:20.751 { 00:07:20.751 "subsystem": "bdev", 00:07:20.751 "config": [ 00:07:20.751 { 00:07:20.751 "params": { 00:07:20.751 "trtype": "pcie", 00:07:20.751 "traddr": "0000:00:10.0", 00:07:20.751 "name": "Nvme0" 00:07:20.751 }, 00:07:20.751 "method": "bdev_nvme_attach_controller" 00:07:20.751 }, 00:07:20.751 { 00:07:20.751 "params": { 00:07:20.751 "trtype": "pcie", 00:07:20.751 "traddr": "0000:00:11.0", 00:07:20.751 "name": "Nvme1" 00:07:20.751 }, 00:07:20.751 "method": "bdev_nvme_attach_controller" 00:07:20.751 }, 00:07:20.751 { 00:07:20.751 "method": "bdev_wait_for_examine" 00:07:20.751 } 00:07:20.751 ] 00:07:20.751 } 00:07:20.751 ] 00:07:20.751 } 00:07:21.046 [2024-07-25 13:50:09.770739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.046 [2024-07-25 13:50:09.887494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.046 [2024-07-25 13:50:09.940810] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:21.563  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:21.563 00:07:21.563 ************************************ 00:07:21.563 END TEST dd_offset_magic 00:07:21.563 ************************************ 00:07:21.563 13:50:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:21.563 13:50:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:21.563 00:07:21.563 real 0m3.274s 00:07:21.563 user 0m2.423s 00:07:21.563 sys 0m0.926s 00:07:21.563 13:50:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.563 13:50:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:21.563 13:50:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:21.563 13:50:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:21.563 13:50:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:21.563 13:50:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:21.563 13:50:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:21.563 13:50:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:21.563 13:50:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:21.563 13:50:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:21.563 13:50:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:21.563 13:50:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:21.563 13:50:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:21.563 [2024-07-25 13:50:10.461283] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:21.564 [2024-07-25 13:50:10.462493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62962 ] 00:07:21.564 { 00:07:21.564 "subsystems": [ 00:07:21.564 { 00:07:21.564 "subsystem": "bdev", 00:07:21.564 "config": [ 00:07:21.564 { 00:07:21.564 "params": { 00:07:21.564 "trtype": "pcie", 00:07:21.564 "traddr": "0000:00:10.0", 00:07:21.564 "name": "Nvme0" 00:07:21.564 }, 00:07:21.564 "method": "bdev_nvme_attach_controller" 00:07:21.564 }, 00:07:21.564 { 00:07:21.564 "params": { 00:07:21.564 "trtype": "pcie", 00:07:21.564 "traddr": "0000:00:11.0", 00:07:21.564 "name": "Nvme1" 00:07:21.564 }, 00:07:21.564 "method": "bdev_nvme_attach_controller" 00:07:21.564 }, 00:07:21.564 { 00:07:21.564 "method": "bdev_wait_for_examine" 00:07:21.564 } 00:07:21.564 ] 00:07:21.564 } 00:07:21.564 ] 00:07:21.564 } 00:07:21.822 [2024-07-25 13:50:10.608116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.822 [2024-07-25 13:50:10.735489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.822 [2024-07-25 13:50:10.794738] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.340  Copying: 5120/5120 [kB] (average 1250 MBps) 00:07:22.340 00:07:22.340 13:50:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:22.340 13:50:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:22.340 13:50:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:22.340 13:50:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:22.340 13:50:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:22.340 13:50:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:22.340 13:50:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:22.340 13:50:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:22.340 13:50:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:22.340 13:50:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:22.340 [2024-07-25 13:50:11.269008] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:22.340 [2024-07-25 13:50:11.269145] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62983 ] 00:07:22.340 { 00:07:22.340 "subsystems": [ 00:07:22.340 { 00:07:22.340 "subsystem": "bdev", 00:07:22.340 "config": [ 00:07:22.340 { 00:07:22.340 "params": { 00:07:22.340 "trtype": "pcie", 00:07:22.340 "traddr": "0000:00:10.0", 00:07:22.340 "name": "Nvme0" 00:07:22.340 }, 00:07:22.340 "method": "bdev_nvme_attach_controller" 00:07:22.340 }, 00:07:22.340 { 00:07:22.340 "params": { 00:07:22.340 "trtype": "pcie", 00:07:22.340 "traddr": "0000:00:11.0", 00:07:22.340 "name": "Nvme1" 00:07:22.340 }, 00:07:22.340 "method": "bdev_nvme_attach_controller" 00:07:22.340 }, 00:07:22.340 { 00:07:22.340 "method": "bdev_wait_for_examine" 00:07:22.340 } 00:07:22.340 ] 00:07:22.340 } 00:07:22.340 ] 00:07:22.340 } 00:07:22.599 [2024-07-25 13:50:11.410508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.599 [2024-07-25 13:50:11.518788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.599 [2024-07-25 13:50:11.578070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.198  Copying: 5120/5120 [kB] (average 833 MBps) 00:07:23.198 00:07:23.199 13:50:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:23.199 00:07:23.199 real 0m7.660s 00:07:23.199 user 0m5.707s 00:07:23.199 sys 0m3.402s 00:07:23.199 13:50:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.199 13:50:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:23.199 ************************************ 00:07:23.199 END TEST spdk_dd_bdev_to_bdev 00:07:23.199 ************************************ 00:07:23.199 13:50:12 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:23.199 13:50:12 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:23.199 13:50:12 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.199 13:50:12 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.199 13:50:12 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:23.199 ************************************ 00:07:23.199 START TEST spdk_dd_uring 00:07:23.199 ************************************ 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:23.199 * Looking for test storage... 
00:07:23.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:23.199 ************************************ 00:07:23.199 START TEST dd_uring_copy 00:07:23.199 ************************************ 00:07:23.199 
13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=9ysmesaejlfxlg5qlzdhwpdkhdzg3anyzsjdzmclnjjjdx9b1xclpon940gsqgwseffpxswapmg2ouxdsbhsw24bmlk2uuobsuxl57qj5nhmrjw4sirza5mhy58c64pbqn2lo7346oropt7l58w5a4269pcm9w7k262abva9j0y7bmftkqe3o37dhkh7e69jrea1bwbj19sst9dza6q5kuinf7iqpsn259bxfikp0u7t21ig6xz3jtjz6f0psql9ri2emlmwuuwq2ta99eophu3d7rvqtdw7yp4xeqtiw1tt2nimbqfuxr4tg9afbp769efj3xzyyh91xinm1rbt9k1a8evxwner268yrpr5t5r35igd88j52o1u887uplhgpr59rv1evi8p1197uam51u9z6jxugwxh5ma8altq0mmakyj2apchzbxwzihrw9kynn5t3muuixf0844vp7b85zles624pkifdmyaat22xi0943344072xev5fmhqjf42gkjg5n3cxgj74zrjqj0z2txcawvgw5694ss4ik6mnp4fx293pk6n31ty98fft1oiklb2cnn0wrwh66jin8s1zm6f4fjmsj2tjz6fcc8q843v4vmbkx0zkb5txd194e4czoug8l377u3tozw9eys8lgkl0b04xb2m49ryctv990vshrrg3o1zua8iahr3u0dera1vqa577e1ln7ruosxd75at6al0r07m8wwhhfmslralvwvafgw8irzw9hhpb82jxkmuujajdchfijs80zvhn8u0lhqgjfrx69dw588w4cesc199mfdn39hlqy4xs7xgqnharip9ux6yszdiz798iofwhhwpggoisob8wd8efobjqdco112wqgvnpwnlow2l90o38mqmamx3kr2b558spwbgvqjg620dmr59b5froizwpd4fhuijjx4ho7xn2zrwvna52c5ig3roz92py3004ign7mscmuim2juyoh2c4fx2xr913g547zdo2bcpmoer 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 9ysmesaejlfxlg5qlzdhwpdkhdzg3anyzsjdzmclnjjjdx9b1xclpon940gsqgwseffpxswapmg2ouxdsbhsw24bmlk2uuobsuxl57qj5nhmrjw4sirza5mhy58c64pbqn2lo7346oropt7l58w5a4269pcm9w7k262abva9j0y7bmftkqe3o37dhkh7e69jrea1bwbj19sst9dza6q5kuinf7iqpsn259bxfikp0u7t21ig6xz3jtjz6f0psql9ri2emlmwuuwq2ta99eophu3d7rvqtdw7yp4xeqtiw1tt2nimbqfuxr4tg9afbp769efj3xzyyh91xinm1rbt9k1a8evxwner268yrpr5t5r35igd88j52o1u887uplhgpr59rv1evi8p1197uam51u9z6jxugwxh5ma8altq0mmakyj2apchzbxwzihrw9kynn5t3muuixf0844vp7b85zles624pkifdmyaat22xi0943344072xev5fmhqjf42gkjg5n3cxgj74zrjqj0z2txcawvgw5694ss4ik6mnp4fx293pk6n31ty98fft1oiklb2cnn0wrwh66jin8s1zm6f4fjmsj2tjz6fcc8q843v4vmbkx0zkb5txd194e4czoug8l377u3tozw9eys8lgkl0b04xb2m49ryctv990vshrrg3o1zua8iahr3u0dera1vqa577e1ln7ruosxd75at6al0r07m8wwhhfmslralvwvafgw8irzw9hhpb82jxkmuujajdchfijs80zvhn8u0lhqgjfrx69dw588w4cesc199mfdn39hlqy4xs7xgqnharip9ux6yszdiz798iofwhhwpggoisob8wd8efobjqdco112wqgvnpwnlow2l90o38mqmamx3kr2b558spwbgvqjg620dmr59b5froizwpd4fhuijjx4ho7xn2zrwvna52c5ig3roz92py3004ign7mscmuim2juyoh2c4fx2xr913g547zdo2bcpmoer 00:07:23.199 13:50:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:23.458 [2024-07-25 13:50:12.242068] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:23.458 [2024-07-25 13:50:12.242176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63053 ] 00:07:23.458 [2024-07-25 13:50:12.382692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.720 [2024-07-25 13:50:12.495050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.720 [2024-07-25 13:50:12.551203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.871  Copying: 511/511 [MB] (average 1091 MBps) 00:07:24.871 00:07:24.871 13:50:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:24.871 13:50:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:24.871 13:50:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:24.871 13:50:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:24.871 { 00:07:24.871 "subsystems": [ 00:07:24.871 { 00:07:24.871 "subsystem": "bdev", 00:07:24.871 "config": [ 00:07:24.871 { 00:07:24.871 "params": { 00:07:24.871 "block_size": 512, 00:07:24.871 "num_blocks": 1048576, 00:07:24.871 "name": "malloc0" 00:07:24.871 }, 00:07:24.871 "method": "bdev_malloc_create" 00:07:24.871 }, 00:07:24.871 { 00:07:24.871 "params": { 00:07:24.871 "filename": "/dev/zram1", 00:07:24.871 "name": "uring0" 00:07:24.871 }, 00:07:24.871 "method": "bdev_uring_create" 00:07:24.871 }, 00:07:24.871 { 00:07:24.871 "method": "bdev_wait_for_examine" 00:07:24.871 } 00:07:24.871 ] 00:07:24.871 } 00:07:24.871 ] 00:07:24.871 } 00:07:24.871 [2024-07-25 13:50:13.708224] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:07:24.871 [2024-07-25 13:50:13.708343] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63069 ] 00:07:24.871 [2024-07-25 13:50:13.846249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.129 [2024-07-25 13:50:13.957368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.129 [2024-07-25 13:50:14.009774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.955  Copying: 220/512 [MB] (220 MBps) Copying: 440/512 [MB] (220 MBps) Copying: 512/512 [MB] (average 220 MBps) 00:07:27.955 00:07:27.955 13:50:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:27.955 13:50:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:27.955 13:50:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:27.955 13:50:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:27.955 [2024-07-25 13:50:16.983814] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:27.955 [2024-07-25 13:50:16.983915] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63113 ] 00:07:27.955 { 00:07:27.955 "subsystems": [ 00:07:27.955 { 00:07:27.955 "subsystem": "bdev", 00:07:27.955 "config": [ 00:07:27.955 { 00:07:27.955 "params": { 00:07:27.955 "block_size": 512, 00:07:27.955 "num_blocks": 1048576, 00:07:27.955 "name": "malloc0" 00:07:27.955 }, 00:07:27.955 "method": "bdev_malloc_create" 00:07:27.955 }, 00:07:27.955 { 00:07:27.955 "params": { 00:07:27.955 "filename": "/dev/zram1", 00:07:27.955 "name": "uring0" 00:07:27.955 }, 00:07:27.955 "method": "bdev_uring_create" 00:07:27.955 }, 00:07:27.955 { 00:07:27.955 "method": "bdev_wait_for_examine" 00:07:27.955 } 00:07:27.955 ] 00:07:27.955 } 00:07:27.955 ] 00:07:27.955 } 00:07:28.213 [2024-07-25 13:50:17.120028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.213 [2024-07-25 13:50:17.240154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.471 [2024-07-25 13:50:17.296611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:31.971  Copying: 170/512 [MB] (170 MBps) Copying: 353/512 [MB] (182 MBps) Copying: 512/512 [MB] (average 172 MBps) 00:07:31.971 00:07:31.971 13:50:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:31.971 13:50:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 9ysmesaejlfxlg5qlzdhwpdkhdzg3anyzsjdzmclnjjjdx9b1xclpon940gsqgwseffpxswapmg2ouxdsbhsw24bmlk2uuobsuxl57qj5nhmrjw4sirza5mhy58c64pbqn2lo7346oropt7l58w5a4269pcm9w7k262abva9j0y7bmftkqe3o37dhkh7e69jrea1bwbj19sst9dza6q5kuinf7iqpsn259bxfikp0u7t21ig6xz3jtjz6f0psql9ri2emlmwuuwq2ta99eophu3d7rvqtdw7yp4xeqtiw1tt2nimbqfuxr4tg9afbp769efj3xzyyh91xinm1rbt9k1a8evxwner268yrpr5t5r35igd88j52o1u887uplhgpr59rv1evi8p1197uam51u9z6jxugwxh5ma8altq0mmakyj2apchzbxwzihrw9kynn5t3muuixf0844vp7b85zles624pkifdmyaat22xi0943344072xev5fmhqjf42gkjg5n3cxgj74zrjqj0z2txcawvgw5694ss4ik6mnp4fx293pk6n31ty98fft1oiklb2cnn0wrwh66jin8s1zm6f4fjmsj2tjz6fcc8q843v4vmbkx0zkb5txd194e4czoug8l377u3tozw9eys8lgkl0b04xb2m49ryctv990vshrrg3o1zua8iahr3u0dera1vqa577e1ln7ruosxd75at6al0r07m8wwhhfmslralvwvafgw8irzw9hhpb82jxkmuujajdchfijs80zvhn8u0lhqgjfrx69dw588w4cesc199mfdn39hlqy4xs7xgqnharip9ux6yszdiz798iofwhhwpggoisob8wd8efobjqdco112wqgvnpwnlow2l90o38mqmamx3kr2b558spwbgvqjg620dmr59b5froizwpd4fhuijjx4ho7xn2zrwvna52c5ig3roz92py3004ign7mscmuim2juyoh2c4fx2xr913g547zdo2bcpmoer == 
\9\y\s\m\e\s\a\e\j\l\f\x\l\g\5\q\l\z\d\h\w\p\d\k\h\d\z\g\3\a\n\y\z\s\j\d\z\m\c\l\n\j\j\j\d\x\9\b\1\x\c\l\p\o\n\9\4\0\g\s\q\g\w\s\e\f\f\p\x\s\w\a\p\m\g\2\o\u\x\d\s\b\h\s\w\2\4\b\m\l\k\2\u\u\o\b\s\u\x\l\5\7\q\j\5\n\h\m\r\j\w\4\s\i\r\z\a\5\m\h\y\5\8\c\6\4\p\b\q\n\2\l\o\7\3\4\6\o\r\o\p\t\7\l\5\8\w\5\a\4\2\6\9\p\c\m\9\w\7\k\2\6\2\a\b\v\a\9\j\0\y\7\b\m\f\t\k\q\e\3\o\3\7\d\h\k\h\7\e\6\9\j\r\e\a\1\b\w\b\j\1\9\s\s\t\9\d\z\a\6\q\5\k\u\i\n\f\7\i\q\p\s\n\2\5\9\b\x\f\i\k\p\0\u\7\t\2\1\i\g\6\x\z\3\j\t\j\z\6\f\0\p\s\q\l\9\r\i\2\e\m\l\m\w\u\u\w\q\2\t\a\9\9\e\o\p\h\u\3\d\7\r\v\q\t\d\w\7\y\p\4\x\e\q\t\i\w\1\t\t\2\n\i\m\b\q\f\u\x\r\4\t\g\9\a\f\b\p\7\6\9\e\f\j\3\x\z\y\y\h\9\1\x\i\n\m\1\r\b\t\9\k\1\a\8\e\v\x\w\n\e\r\2\6\8\y\r\p\r\5\t\5\r\3\5\i\g\d\8\8\j\5\2\o\1\u\8\8\7\u\p\l\h\g\p\r\5\9\r\v\1\e\v\i\8\p\1\1\9\7\u\a\m\5\1\u\9\z\6\j\x\u\g\w\x\h\5\m\a\8\a\l\t\q\0\m\m\a\k\y\j\2\a\p\c\h\z\b\x\w\z\i\h\r\w\9\k\y\n\n\5\t\3\m\u\u\i\x\f\0\8\4\4\v\p\7\b\8\5\z\l\e\s\6\2\4\p\k\i\f\d\m\y\a\a\t\2\2\x\i\0\9\4\3\3\4\4\0\7\2\x\e\v\5\f\m\h\q\j\f\4\2\g\k\j\g\5\n\3\c\x\g\j\7\4\z\r\j\q\j\0\z\2\t\x\c\a\w\v\g\w\5\6\9\4\s\s\4\i\k\6\m\n\p\4\f\x\2\9\3\p\k\6\n\3\1\t\y\9\8\f\f\t\1\o\i\k\l\b\2\c\n\n\0\w\r\w\h\6\6\j\i\n\8\s\1\z\m\6\f\4\f\j\m\s\j\2\t\j\z\6\f\c\c\8\q\8\4\3\v\4\v\m\b\k\x\0\z\k\b\5\t\x\d\1\9\4\e\4\c\z\o\u\g\8\l\3\7\7\u\3\t\o\z\w\9\e\y\s\8\l\g\k\l\0\b\0\4\x\b\2\m\4\9\r\y\c\t\v\9\9\0\v\s\h\r\r\g\3\o\1\z\u\a\8\i\a\h\r\3\u\0\d\e\r\a\1\v\q\a\5\7\7\e\1\l\n\7\r\u\o\s\x\d\7\5\a\t\6\a\l\0\r\0\7\m\8\w\w\h\h\f\m\s\l\r\a\l\v\w\v\a\f\g\w\8\i\r\z\w\9\h\h\p\b\8\2\j\x\k\m\u\u\j\a\j\d\c\h\f\i\j\s\8\0\z\v\h\n\8\u\0\l\h\q\g\j\f\r\x\6\9\d\w\5\8\8\w\4\c\e\s\c\1\9\9\m\f\d\n\3\9\h\l\q\y\4\x\s\7\x\g\q\n\h\a\r\i\p\9\u\x\6\y\s\z\d\i\z\7\9\8\i\o\f\w\h\h\w\p\g\g\o\i\s\o\b\8\w\d\8\e\f\o\b\j\q\d\c\o\1\1\2\w\q\g\v\n\p\w\n\l\o\w\2\l\9\0\o\3\8\m\q\m\a\m\x\3\k\r\2\b\5\5\8\s\p\w\b\g\v\q\j\g\6\2\0\d\m\r\5\9\b\5\f\r\o\i\z\w\p\d\4\f\h\u\i\j\j\x\4\h\o\7\x\n\2\z\r\w\v\n\a\5\2\c\5\i\g\3\r\o\z\9\2\p\y\3\0\0\4\i\g\n\7\m\s\c\m\u\i\m\2\j\u\y\o\h\2\c\4\f\x\2\x\r\9\1\3\g\5\4\7\z\d\o\2\b\c\p\m\o\e\r ]] 00:07:31.971 13:50:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:31.971 13:50:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 9ysmesaejlfxlg5qlzdhwpdkhdzg3anyzsjdzmclnjjjdx9b1xclpon940gsqgwseffpxswapmg2ouxdsbhsw24bmlk2uuobsuxl57qj5nhmrjw4sirza5mhy58c64pbqn2lo7346oropt7l58w5a4269pcm9w7k262abva9j0y7bmftkqe3o37dhkh7e69jrea1bwbj19sst9dza6q5kuinf7iqpsn259bxfikp0u7t21ig6xz3jtjz6f0psql9ri2emlmwuuwq2ta99eophu3d7rvqtdw7yp4xeqtiw1tt2nimbqfuxr4tg9afbp769efj3xzyyh91xinm1rbt9k1a8evxwner268yrpr5t5r35igd88j52o1u887uplhgpr59rv1evi8p1197uam51u9z6jxugwxh5ma8altq0mmakyj2apchzbxwzihrw9kynn5t3muuixf0844vp7b85zles624pkifdmyaat22xi0943344072xev5fmhqjf42gkjg5n3cxgj74zrjqj0z2txcawvgw5694ss4ik6mnp4fx293pk6n31ty98fft1oiklb2cnn0wrwh66jin8s1zm6f4fjmsj2tjz6fcc8q843v4vmbkx0zkb5txd194e4czoug8l377u3tozw9eys8lgkl0b04xb2m49ryctv990vshrrg3o1zua8iahr3u0dera1vqa577e1ln7ruosxd75at6al0r07m8wwhhfmslralvwvafgw8irzw9hhpb82jxkmuujajdchfijs80zvhn8u0lhqgjfrx69dw588w4cesc199mfdn39hlqy4xs7xgqnharip9ux6yszdiz798iofwhhwpggoisob8wd8efobjqdco112wqgvnpwnlow2l90o38mqmamx3kr2b558spwbgvqjg620dmr59b5froizwpd4fhuijjx4ho7xn2zrwvna52c5ig3roz92py3004ign7mscmuim2juyoh2c4fx2xr913g547zdo2bcpmoer == 
\9\y\s\m\e\s\a\e\j\l\f\x\l\g\5\q\l\z\d\h\w\p\d\k\h\d\z\g\3\a\n\y\z\s\j\d\z\m\c\l\n\j\j\j\d\x\9\b\1\x\c\l\p\o\n\9\4\0\g\s\q\g\w\s\e\f\f\p\x\s\w\a\p\m\g\2\o\u\x\d\s\b\h\s\w\2\4\b\m\l\k\2\u\u\o\b\s\u\x\l\5\7\q\j\5\n\h\m\r\j\w\4\s\i\r\z\a\5\m\h\y\5\8\c\6\4\p\b\q\n\2\l\o\7\3\4\6\o\r\o\p\t\7\l\5\8\w\5\a\4\2\6\9\p\c\m\9\w\7\k\2\6\2\a\b\v\a\9\j\0\y\7\b\m\f\t\k\q\e\3\o\3\7\d\h\k\h\7\e\6\9\j\r\e\a\1\b\w\b\j\1\9\s\s\t\9\d\z\a\6\q\5\k\u\i\n\f\7\i\q\p\s\n\2\5\9\b\x\f\i\k\p\0\u\7\t\2\1\i\g\6\x\z\3\j\t\j\z\6\f\0\p\s\q\l\9\r\i\2\e\m\l\m\w\u\u\w\q\2\t\a\9\9\e\o\p\h\u\3\d\7\r\v\q\t\d\w\7\y\p\4\x\e\q\t\i\w\1\t\t\2\n\i\m\b\q\f\u\x\r\4\t\g\9\a\f\b\p\7\6\9\e\f\j\3\x\z\y\y\h\9\1\x\i\n\m\1\r\b\t\9\k\1\a\8\e\v\x\w\n\e\r\2\6\8\y\r\p\r\5\t\5\r\3\5\i\g\d\8\8\j\5\2\o\1\u\8\8\7\u\p\l\h\g\p\r\5\9\r\v\1\e\v\i\8\p\1\1\9\7\u\a\m\5\1\u\9\z\6\j\x\u\g\w\x\h\5\m\a\8\a\l\t\q\0\m\m\a\k\y\j\2\a\p\c\h\z\b\x\w\z\i\h\r\w\9\k\y\n\n\5\t\3\m\u\u\i\x\f\0\8\4\4\v\p\7\b\8\5\z\l\e\s\6\2\4\p\k\i\f\d\m\y\a\a\t\2\2\x\i\0\9\4\3\3\4\4\0\7\2\x\e\v\5\f\m\h\q\j\f\4\2\g\k\j\g\5\n\3\c\x\g\j\7\4\z\r\j\q\j\0\z\2\t\x\c\a\w\v\g\w\5\6\9\4\s\s\4\i\k\6\m\n\p\4\f\x\2\9\3\p\k\6\n\3\1\t\y\9\8\f\f\t\1\o\i\k\l\b\2\c\n\n\0\w\r\w\h\6\6\j\i\n\8\s\1\z\m\6\f\4\f\j\m\s\j\2\t\j\z\6\f\c\c\8\q\8\4\3\v\4\v\m\b\k\x\0\z\k\b\5\t\x\d\1\9\4\e\4\c\z\o\u\g\8\l\3\7\7\u\3\t\o\z\w\9\e\y\s\8\l\g\k\l\0\b\0\4\x\b\2\m\4\9\r\y\c\t\v\9\9\0\v\s\h\r\r\g\3\o\1\z\u\a\8\i\a\h\r\3\u\0\d\e\r\a\1\v\q\a\5\7\7\e\1\l\n\7\r\u\o\s\x\d\7\5\a\t\6\a\l\0\r\0\7\m\8\w\w\h\h\f\m\s\l\r\a\l\v\w\v\a\f\g\w\8\i\r\z\w\9\h\h\p\b\8\2\j\x\k\m\u\u\j\a\j\d\c\h\f\i\j\s\8\0\z\v\h\n\8\u\0\l\h\q\g\j\f\r\x\6\9\d\w\5\8\8\w\4\c\e\s\c\1\9\9\m\f\d\n\3\9\h\l\q\y\4\x\s\7\x\g\q\n\h\a\r\i\p\9\u\x\6\y\s\z\d\i\z\7\9\8\i\o\f\w\h\h\w\p\g\g\o\i\s\o\b\8\w\d\8\e\f\o\b\j\q\d\c\o\1\1\2\w\q\g\v\n\p\w\n\l\o\w\2\l\9\0\o\3\8\m\q\m\a\m\x\3\k\r\2\b\5\5\8\s\p\w\b\g\v\q\j\g\6\2\0\d\m\r\5\9\b\5\f\r\o\i\z\w\p\d\4\f\h\u\i\j\j\x\4\h\o\7\x\n\2\z\r\w\v\n\a\5\2\c\5\i\g\3\r\o\z\9\2\p\y\3\0\0\4\i\g\n\7\m\s\c\m\u\i\m\2\j\u\y\o\h\2\c\4\f\x\2\x\r\9\1\3\g\5\4\7\z\d\o\2\b\c\p\m\o\e\r ]] 00:07:31.971 13:50:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:32.537 13:50:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:32.537 13:50:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:32.537 13:50:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:32.537 13:50:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:32.538 [2024-07-25 13:50:21.336161] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:32.538 [2024-07-25 13:50:21.336283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63186 ] 00:07:32.538 { 00:07:32.538 "subsystems": [ 00:07:32.538 { 00:07:32.538 "subsystem": "bdev", 00:07:32.538 "config": [ 00:07:32.538 { 00:07:32.538 "params": { 00:07:32.538 "block_size": 512, 00:07:32.538 "num_blocks": 1048576, 00:07:32.538 "name": "malloc0" 00:07:32.538 }, 00:07:32.538 "method": "bdev_malloc_create" 00:07:32.538 }, 00:07:32.538 { 00:07:32.538 "params": { 00:07:32.538 "filename": "/dev/zram1", 00:07:32.538 "name": "uring0" 00:07:32.538 }, 00:07:32.538 "method": "bdev_uring_create" 00:07:32.538 }, 00:07:32.538 { 00:07:32.538 "method": "bdev_wait_for_examine" 00:07:32.538 } 00:07:32.538 ] 00:07:32.538 } 00:07:32.538 ] 00:07:32.538 } 00:07:32.538 [2024-07-25 13:50:21.470111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.796 [2024-07-25 13:50:21.576960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.796 [2024-07-25 13:50:21.630935] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:36.921  Copying: 147/512 [MB] (147 MBps) Copying: 292/512 [MB] (144 MBps) Copying: 443/512 [MB] (150 MBps) Copying: 512/512 [MB] (average 147 MBps) 00:07:36.921 00:07:36.921 13:50:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:36.921 13:50:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:36.921 13:50:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:36.921 13:50:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:36.921 13:50:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:36.921 13:50:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:36.921 13:50:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:36.921 13:50:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:36.921 [2024-07-25 13:50:25.760367] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:36.921 [2024-07-25 13:50:25.760486] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63244 ] 00:07:36.921 { 00:07:36.921 "subsystems": [ 00:07:36.921 { 00:07:36.921 "subsystem": "bdev", 00:07:36.921 "config": [ 00:07:36.921 { 00:07:36.921 "params": { 00:07:36.921 "block_size": 512, 00:07:36.921 "num_blocks": 1048576, 00:07:36.921 "name": "malloc0" 00:07:36.921 }, 00:07:36.921 "method": "bdev_malloc_create" 00:07:36.921 }, 00:07:36.921 { 00:07:36.921 "params": { 00:07:36.921 "filename": "/dev/zram1", 00:07:36.921 "name": "uring0" 00:07:36.921 }, 00:07:36.921 "method": "bdev_uring_create" 00:07:36.921 }, 00:07:36.921 { 00:07:36.921 "params": { 00:07:36.921 "name": "uring0" 00:07:36.921 }, 00:07:36.921 "method": "bdev_uring_delete" 00:07:36.921 }, 00:07:36.921 { 00:07:36.921 "method": "bdev_wait_for_examine" 00:07:36.921 } 00:07:36.921 ] 00:07:36.921 } 00:07:36.921 ] 00:07:36.921 } 00:07:36.921 [2024-07-25 13:50:25.898275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.180 [2024-07-25 13:50:26.012162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.180 [2024-07-25 13:50:26.067626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:37.700  Copying: 0/0 [B] (average 0 Bps) 00:07:37.700 00:07:37.700 13:50:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:37.700 13:50:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:37.700 13:50:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:07:37.700 13:50:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:37.700 13:50:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:37.700 13:50:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.700 13:50:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:37.700 13:50:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:37.700 13:50:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.700 13:50:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.700 13:50:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.700 13:50:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.700 13:50:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.700 13:50:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.700 13:50:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:37.700 13:50:26 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:37.958 [2024-07-25 13:50:26.765546] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:07:37.958 [2024-07-25 13:50:26.765685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63273 ] 00:07:37.958 { 00:07:37.958 "subsystems": [ 00:07:37.958 { 00:07:37.958 "subsystem": "bdev", 00:07:37.958 "config": [ 00:07:37.958 { 00:07:37.958 "params": { 00:07:37.958 "block_size": 512, 00:07:37.958 "num_blocks": 1048576, 00:07:37.958 "name": "malloc0" 00:07:37.958 }, 00:07:37.958 "method": "bdev_malloc_create" 00:07:37.958 }, 00:07:37.958 { 00:07:37.958 "params": { 00:07:37.958 "filename": "/dev/zram1", 00:07:37.958 "name": "uring0" 00:07:37.958 }, 00:07:37.958 "method": "bdev_uring_create" 00:07:37.958 }, 00:07:37.958 { 00:07:37.958 "params": { 00:07:37.958 "name": "uring0" 00:07:37.958 }, 00:07:37.958 "method": "bdev_uring_delete" 00:07:37.958 }, 00:07:37.958 { 00:07:37.958 "method": "bdev_wait_for_examine" 00:07:37.958 } 00:07:37.958 ] 00:07:37.958 } 00:07:37.958 ] 00:07:37.958 } 00:07:37.958 [2024-07-25 13:50:26.905854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.216 [2024-07-25 13:50:27.004317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.216 [2024-07-25 13:50:27.058295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:38.475 [2024-07-25 13:50:27.262642] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:38.475 [2024-07-25 13:50:27.262718] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:38.475 [2024-07-25 13:50:27.262731] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:38.475 [2024-07-25 13:50:27.262742] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.733 [2024-07-25 13:50:27.578379] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:38.733 13:50:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:07:38.733 13:50:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:38.733 13:50:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:07:38.733 13:50:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:07:38.733 13:50:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:07:38.733 13:50:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:38.733 13:50:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:38.733 13:50:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:38.733 13:50:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:38.733 13:50:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:38.733 13:50:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:38.733 13:50:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:38.991 00:07:38.991 real 0m15.785s 00:07:38.991 user 0m10.870s 00:07:38.991 sys 0m12.642s 00:07:38.991 ************************************ 00:07:38.991 END TEST dd_uring_copy 00:07:38.991 ************************************ 00:07:38.991 13:50:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.992 13:50:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:38.992 00:07:38.992 real 0m15.918s 00:07:38.992 user 0m10.928s 00:07:38.992 sys 0m12.722s 00:07:38.992 13:50:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.992 13:50:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:38.992 ************************************ 00:07:38.992 END TEST spdk_dd_uring 00:07:38.992 ************************************ 00:07:38.992 13:50:28 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:38.992 13:50:28 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.992 13:50:28 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.992 13:50:28 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:39.251 ************************************ 00:07:39.251 START TEST spdk_dd_sparse 00:07:39.251 ************************************ 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:39.251 * Looking for test storage... 00:07:39.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:39.251 1+0 records in 00:07:39.251 1+0 records out 00:07:39.251 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00421313 s, 996 MB/s 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:39.251 1+0 records in 00:07:39.251 1+0 records out 00:07:39.251 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00501671 s, 836 MB/s 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:39.251 1+0 records in 00:07:39.251 1+0 records out 00:07:39.251 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00528279 s, 794 MB/s 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:39.251 ************************************ 00:07:39.251 START TEST dd_sparse_file_to_file 00:07:39.251 ************************************ 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # 
file_to_file 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:39.251 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:39.251 [2024-07-25 13:50:28.205192] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:07:39.251 [2024-07-25 13:50:28.205295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63364 ] 00:07:39.251 { 00:07:39.251 "subsystems": [ 00:07:39.251 { 00:07:39.251 "subsystem": "bdev", 00:07:39.251 "config": [ 00:07:39.251 { 00:07:39.251 "params": { 00:07:39.251 "block_size": 4096, 00:07:39.251 "filename": "dd_sparse_aio_disk", 00:07:39.251 "name": "dd_aio" 00:07:39.251 }, 00:07:39.251 "method": "bdev_aio_create" 00:07:39.251 }, 00:07:39.251 { 00:07:39.251 "params": { 00:07:39.251 "lvs_name": "dd_lvstore", 00:07:39.251 "bdev_name": "dd_aio" 00:07:39.251 }, 00:07:39.251 "method": "bdev_lvol_create_lvstore" 00:07:39.251 }, 00:07:39.251 { 00:07:39.251 "method": "bdev_wait_for_examine" 00:07:39.251 } 00:07:39.251 ] 00:07:39.251 } 00:07:39.251 ] 00:07:39.251 } 00:07:39.510 [2024-07-25 13:50:28.343263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.510 [2024-07-25 13:50:28.477782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.769 [2024-07-25 13:50:28.542606] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:40.028  Copying: 12/36 [MB] (average 923 MBps) 00:07:40.028 00:07:40.028 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:40.028 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:40.028 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:40.028 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:40.028 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:40.028 13:50:28 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:40.028 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:40.028 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:40.028 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:40.028 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:40.028 00:07:40.028 real 0m0.803s 00:07:40.028 user 0m0.507s 00:07:40.028 sys 0m0.385s 00:07:40.028 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.028 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:40.028 ************************************ 00:07:40.028 END TEST dd_sparse_file_to_file 00:07:40.028 ************************************ 00:07:40.028 13:50:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:40.028 13:50:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.028 13:50:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.028 13:50:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:40.028 ************************************ 00:07:40.028 START TEST dd_sparse_file_to_bdev 00:07:40.028 ************************************ 00:07:40.028 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:07:40.028 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:40.028 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:40.028 13:50:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:40.028 13:50:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:40.028 13:50:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:40.028 13:50:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:40.028 13:50:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:40.028 13:50:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:40.028 [2024-07-25 13:50:29.056982] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:40.028 [2024-07-25 13:50:29.057095] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63407 ] 00:07:40.286 { 00:07:40.287 "subsystems": [ 00:07:40.287 { 00:07:40.287 "subsystem": "bdev", 00:07:40.287 "config": [ 00:07:40.287 { 00:07:40.287 "params": { 00:07:40.287 "block_size": 4096, 00:07:40.287 "filename": "dd_sparse_aio_disk", 00:07:40.287 "name": "dd_aio" 00:07:40.287 }, 00:07:40.287 "method": "bdev_aio_create" 00:07:40.287 }, 00:07:40.287 { 00:07:40.287 "params": { 00:07:40.287 "lvs_name": "dd_lvstore", 00:07:40.287 "lvol_name": "dd_lvol", 00:07:40.287 "size_in_mib": 36, 00:07:40.287 "thin_provision": true 00:07:40.287 }, 00:07:40.287 "method": "bdev_lvol_create" 00:07:40.287 }, 00:07:40.287 { 00:07:40.287 "method": "bdev_wait_for_examine" 00:07:40.287 } 00:07:40.287 ] 00:07:40.287 } 00:07:40.287 ] 00:07:40.287 } 00:07:40.287 [2024-07-25 13:50:29.193183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.287 [2024-07-25 13:50:29.313932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.546 [2024-07-25 13:50:29.370040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:40.892  Copying: 12/36 [MB] (average 461 MBps) 00:07:40.892 00:07:40.892 00:07:40.892 real 0m0.738s 00:07:40.892 user 0m0.470s 00:07:40.892 sys 0m0.378s 00:07:40.892 13:50:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.892 13:50:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:40.892 ************************************ 00:07:40.892 END TEST dd_sparse_file_to_bdev 00:07:40.892 ************************************ 00:07:40.892 13:50:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:40.892 13:50:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.892 13:50:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.892 13:50:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:40.893 ************************************ 00:07:40.893 START TEST dd_sparse_bdev_to_file 00:07:40.893 ************************************ 00:07:40.893 13:50:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:07:40.893 13:50:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:40.893 13:50:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:40.893 13:50:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:40.893 13:50:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:40.893 13:50:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:40.893 13:50:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:40.893 13:50:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # 
xtrace_disable 00:07:40.893 13:50:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:40.893 [2024-07-25 13:50:29.844155] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:07:40.893 [2024-07-25 13:50:29.844250] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63445 ] 00:07:40.893 { 00:07:40.893 "subsystems": [ 00:07:40.893 { 00:07:40.893 "subsystem": "bdev", 00:07:40.893 "config": [ 00:07:40.893 { 00:07:40.893 "params": { 00:07:40.893 "block_size": 4096, 00:07:40.893 "filename": "dd_sparse_aio_disk", 00:07:40.893 "name": "dd_aio" 00:07:40.893 }, 00:07:40.893 "method": "bdev_aio_create" 00:07:40.893 }, 00:07:40.893 { 00:07:40.893 "method": "bdev_wait_for_examine" 00:07:40.893 } 00:07:40.893 ] 00:07:40.893 } 00:07:40.893 ] 00:07:40.893 } 00:07:41.152 [2024-07-25 13:50:29.983359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.152 [2024-07-25 13:50:30.100366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.152 [2024-07-25 13:50:30.155729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:41.669  Copying: 12/36 [MB] (average 923 MBps) 00:07:41.669 00:07:41.669 13:50:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:41.669 13:50:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:41.669 13:50:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:41.669 13:50:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:41.669 13:50:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:41.669 13:50:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:41.669 13:50:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:41.669 13:50:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:41.669 13:50:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:41.669 13:50:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:41.669 00:07:41.669 real 0m0.737s 00:07:41.669 user 0m0.469s 00:07:41.669 sys 0m0.379s 00:07:41.669 13:50:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.669 13:50:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:41.669 ************************************ 00:07:41.669 END TEST dd_sparse_bdev_to_file 00:07:41.669 ************************************ 00:07:41.669 13:50:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:41.669 13:50:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:41.669 13:50:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:41.669 13:50:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:41.669 13:50:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:41.669 00:07:41.669 real 0m2.568s 00:07:41.669 user 
0m1.547s 00:07:41.669 sys 0m1.319s 00:07:41.669 13:50:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.669 ************************************ 00:07:41.669 END TEST spdk_dd_sparse 00:07:41.669 ************************************ 00:07:41.669 13:50:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:41.669 13:50:30 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:41.669 13:50:30 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.669 13:50:30 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.669 13:50:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:41.669 ************************************ 00:07:41.669 START TEST spdk_dd_negative 00:07:41.669 ************************************ 00:07:41.669 13:50:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:41.928 * Looking for test storage... 00:07:41.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:41.928 ************************************ 00:07:41.928 START TEST dd_invalid_arguments 00:07:41.928 ************************************ 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:41.928 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:41.928 00:07:41.928 CPU options: 00:07:41.928 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:41.928 (like [0,1,10]) 00:07:41.928 --lcores lcore to CPU mapping list. The list is in the format: 00:07:41.928 [<,lcores[@CPUs]>...] 00:07:41.928 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:41.928 Within the group, '-' is used for range separator, 00:07:41.928 ',' is used for single number separator. 00:07:41.928 '( )' can be omitted for single element group, 00:07:41.928 '@' can be omitted if cpus and lcores have the same value 00:07:41.928 --disable-cpumask-locks Disable CPU core lock files. 00:07:41.928 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:41.928 pollers in the app support interrupt mode) 00:07:41.928 -p, --main-core main (primary) core for DPDK 00:07:41.928 00:07:41.928 Configuration options: 00:07:41.928 -c, --config, --json JSON config file 00:07:41.928 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:41.928 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:41.928 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:41.928 --rpcs-allowed comma-separated list of permitted RPCS 00:07:41.928 --json-ignore-init-errors don't exit on invalid config entry 00:07:41.928 00:07:41.928 Memory options: 00:07:41.928 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:41.928 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:41.928 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:41.928 -R, --huge-unlink unlink huge files after initialization 00:07:41.928 -n, --mem-channels number of memory channels used for DPDK 00:07:41.928 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:41.928 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:41.928 --no-huge run without using hugepages 00:07:41.928 -i, --shm-id shared memory ID (optional) 00:07:41.928 -g, --single-file-segments force creating just one hugetlbfs file 00:07:41.928 00:07:41.928 PCI options: 00:07:41.928 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:41.928 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:41.928 -u, --no-pci disable PCI access 00:07:41.928 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:41.928 00:07:41.928 Log options: 00:07:41.928 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:41.928 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:41.928 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:41.928 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:41.928 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:41.928 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:41.928 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:41.928 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:41.928 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:41.928 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:07:41.928 virtio_vfio_user, vmd) 00:07:41.928 --silence-noticelog 
disable notice level logging to stderr 00:07:41.928 00:07:41.928 Trace options: 00:07:41.928 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:41.928 setting 0 to disable trace (default 32768) 00:07:41.928 Tracepoints vary in size and can use more than one trace entry. 00:07:41.928 -e, --tpoint-group [:] 00:07:41.928 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:41.928 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:41.928 [2024-07-25 13:50:30.805945] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:41.928 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:41.928 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:41.928 a tracepoint group. First tpoint inside a group can be enabled by 00:07:41.928 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:41.928 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:41.928 in /include/spdk_internal/trace_defs.h 00:07:41.928 00:07:41.928 Other options: 00:07:41.928 -h, --help show this usage 00:07:41.928 -v, --version print SPDK version 00:07:41.928 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:41.928 --env-context Opaque context for use of the env implementation 00:07:41.928 00:07:41.928 Application specific: 00:07:41.928 [--------- DD Options ---------] 00:07:41.928 --if Input file. Must specify either --if or --ib. 00:07:41.928 --ib Input bdev. Must specifier either --if or --ib 00:07:41.928 --of Output file. Must specify either --of or --ob. 00:07:41.928 --ob Output bdev. Must specify either --of or --ob. 00:07:41.928 --iflag Input file flags. 00:07:41.928 --oflag Output file flags. 00:07:41.928 --bs I/O unit size (default: 4096) 00:07:41.928 --qd Queue depth (default: 2) 00:07:41.928 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:41.928 --skip Skip this many I/O units at start of input. (default: 0) 00:07:41.928 --seek Skip this many I/O units at start of output. (default: 0) 00:07:41.928 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:07:41.928 --sparse Enable hole skipping in input target 00:07:41.928 Available iflag and oflag values: 00:07:41.928 append - append mode 00:07:41.928 direct - use direct I/O for data 00:07:41.928 directory - fail unless a directory 00:07:41.928 dsync - use synchronized I/O for data 00:07:41.928 noatime - do not update access time 00:07:41.928 noctty - do not assign controlling terminal from file 00:07:41.928 nofollow - do not follow symlinks 00:07:41.928 nonblock - use non-blocking I/O 00:07:41.928 sync - use synchronized I/O for data and metadata 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:41.928 00:07:41.928 real 0m0.074s 00:07:41.928 user 0m0.043s 00:07:41.928 sys 0m0.030s 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:41.928 ************************************ 00:07:41.928 END TEST dd_invalid_arguments 00:07:41.928 ************************************ 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:41.928 ************************************ 00:07:41.928 START TEST dd_double_input 00:07:41.928 ************************************ 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_double_input -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:41.928 [2024-07-25 13:50:30.934364] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:41.928 00:07:41.928 real 0m0.077s 00:07:41.928 user 0m0.043s 00:07:41.928 sys 0m0.033s 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.928 13:50:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:41.928 ************************************ 00:07:41.928 END TEST dd_double_input 00:07:41.928 ************************************ 00:07:42.187 13:50:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:42.187 13:50:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.187 13:50:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.187 13:50:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:42.187 ************************************ 00:07:42.187 START TEST dd_double_output 00:07:42.187 ************************************ 00:07:42.187 13:50:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:07:42.187 13:50:31 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:42.187 13:50:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:07:42.187 13:50:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:42.187 13:50:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.187 13:50:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.187 13:50:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.187 13:50:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.187 13:50:31 
spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.187 13:50:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.187 13:50:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.187 13:50:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:42.187 13:50:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:42.187 [2024-07-25 13:50:31.058449] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.188 00:07:42.188 real 0m0.068s 00:07:42.188 user 0m0.041s 00:07:42.188 sys 0m0.026s 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.188 ************************************ 00:07:42.188 END TEST dd_double_output 00:07:42.188 ************************************ 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:42.188 ************************************ 00:07:42.188 START TEST dd_no_input 00:07:42.188 ************************************ 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_no_input -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:42.188 [2024-07-25 13:50:31.176987] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:42.188 ************************************ 00:07:42.188 END TEST dd_no_input 00:07:42.188 ************************************ 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.188 00:07:42.188 real 0m0.067s 00:07:42.188 user 0m0.039s 00:07:42.188 sys 0m0.028s 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.188 13:50:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:42.447 ************************************ 00:07:42.447 START TEST dd_no_output 00:07:42.447 ************************************ 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.447 13:50:31 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:42.447 [2024-07-25 13:50:31.307727] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.447 00:07:42.447 real 0m0.078s 00:07:42.447 user 0m0.050s 00:07:42.447 sys 0m0.027s 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:42.447 ************************************ 00:07:42.447 END TEST dd_no_output 00:07:42.447 ************************************ 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:42.447 ************************************ 00:07:42.447 START TEST dd_wrong_blocksize 00:07:42.447 ************************************ 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.447 13:50:31 
spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:42.447 [2024-07-25 13:50:31.426652] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.447 00:07:42.447 real 0m0.071s 00:07:42.447 user 0m0.045s 00:07:42.447 sys 0m0.026s 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.447 ************************************ 00:07:42.447 END TEST dd_wrong_blocksize 00:07:42.447 ************************************ 00:07:42.447 13:50:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:42.705 13:50:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:42.705 13:50:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.706 13:50:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.706 13:50:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:42.706 ************************************ 00:07:42.706 START TEST dd_smaller_blocksize 00:07:42.706 ************************************ 00:07:42.706 13:50:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:07:42.706 13:50:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:42.706 13:50:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:42.706 13:50:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:42.706 13:50:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.706 13:50:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.706 13:50:31 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.706 13:50:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.706 13:50:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.706 13:50:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.706 13:50:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.706 13:50:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:42.706 13:50:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:42.706 [2024-07-25 13:50:31.545817] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:07:42.706 [2024-07-25 13:50:31.545924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63669 ] 00:07:42.706 [2024-07-25 13:50:31.680954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.964 [2024-07-25 13:50:31.811415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.964 [2024-07-25 13:50:31.869701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:43.222 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:43.222 [2024-07-25 13:50:32.202479] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:43.222 [2024-07-25 13:50:32.202563] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.481 [2024-07-25 13:50:32.328170] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:43.481 13:50:32 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:07:43.481 13:50:32 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:43.481 13:50:32 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:07:43.481 13:50:32 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:07:43.481 13:50:32 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:07:43.481 13:50:32 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:43.481 00:07:43.481 real 0m0.958s 00:07:43.481 user 0m0.460s 00:07:43.482 sys 0m0.390s 00:07:43.482 13:50:32 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.482 13:50:32 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:43.482 ************************************ 00:07:43.482 END TEST dd_smaller_blocksize 00:07:43.482 ************************************ 00:07:43.482 13:50:32 
spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:43.482 13:50:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:43.482 13:50:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.482 13:50:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:43.482 ************************************ 00:07:43.482 START TEST dd_invalid_count 00:07:43.482 ************************************ 00:07:43.482 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 00:07:43.482 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:43.482 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:07:43.482 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:43.482 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.482 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.482 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.482 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.482 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.482 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.482 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.482 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:43.482 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:43.743 [2024-07-25 13:50:32.546217] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:43.743 00:07:43.743 real 0m0.060s 00:07:43.743 user 0m0.034s 00:07:43.743 sys 0m0.026s 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_count 
-- common/autotest_common.sh@10 -- # set +x 00:07:43.743 ************************************ 00:07:43.743 END TEST dd_invalid_count 00:07:43.743 ************************************ 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:43.743 ************************************ 00:07:43.743 START TEST dd_invalid_oflag 00:07:43.743 ************************************ 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:43.743 [2024-07-25 13:50:32.661497] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:43.743 00:07:43.743 real 0m0.074s 00:07:43.743 user 0m0.047s 00:07:43.743 sys 0m0.026s 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.743 ************************************ 00:07:43.743 END TEST dd_invalid_oflag 00:07:43.743 
************************************ 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.743 13:50:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:43.743 ************************************ 00:07:43.744 START TEST dd_invalid_iflag 00:07:43.744 ************************************ 00:07:43.744 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:07:43.744 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:43.744 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:07:43.744 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:43.744 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.744 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.744 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.744 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.744 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.744 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.744 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.744 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:43.744 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:44.003 [2024-07-25 13:50:32.808490] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:44.003 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:07:44.003 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:44.003 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:44.003 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:44.003 00:07:44.003 real 0m0.096s 00:07:44.003 user 0m0.062s 00:07:44.003 sys 0m0.032s 00:07:44.003 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.003 13:50:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:44.003 
************************************ 00:07:44.003 END TEST dd_invalid_iflag 00:07:44.003 ************************************ 00:07:44.003 13:50:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:44.003 13:50:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.003 13:50:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.003 13:50:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:44.003 ************************************ 00:07:44.003 START TEST dd_unknown_flag 00:07:44.003 ************************************ 00:07:44.003 13:50:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:07:44.003 13:50:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:44.003 13:50:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:07:44.003 13:50:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:44.003 13:50:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.004 13:50:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.004 13:50:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.004 13:50:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.004 13:50:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.004 13:50:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.004 13:50:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.004 13:50:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:44.004 13:50:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:44.004 [2024-07-25 13:50:32.932882] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:44.004 [2024-07-25 13:50:32.932971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63761 ] 00:07:44.263 [2024-07-25 13:50:33.073715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.263 [2024-07-25 13:50:33.189914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.263 [2024-07-25 13:50:33.250009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:44.263 [2024-07-25 13:50:33.286849] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:44.263 [2024-07-25 13:50:33.286923] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:44.263 [2024-07-25 13:50:33.287006] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:44.263 [2024-07-25 13:50:33.287024] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:44.263 [2024-07-25 13:50:33.287340] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:44.263 [2024-07-25 13:50:33.287376] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:44.263 [2024-07-25 13:50:33.287430] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:44.263 [2024-07-25 13:50:33.287465] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:44.539 [2024-07-25 13:50:33.407676] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:44.539 13:50:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:07:44.539 13:50:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:44.539 13:50:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:07:44.539 13:50:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:07:44.539 13:50:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:07:44.539 13:50:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:44.539 00:07:44.539 real 0m0.653s 00:07:44.539 user 0m0.385s 00:07:44.539 sys 0m0.176s 00:07:44.539 13:50:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.539 13:50:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:44.539 ************************************ 00:07:44.539 END TEST dd_unknown_flag 00:07:44.539 ************************************ 00:07:44.803 13:50:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:44.803 13:50:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.803 13:50:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.803 13:50:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:44.803 ************************************ 00:07:44.803 START TEST dd_invalid_json 00:07:44.803 ************************************ 00:07:44.803 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:07:44.803 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:44.803 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:07:44.803 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:07:44.803 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:44.803 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.803 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.803 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.803 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.803 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.803 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.803 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.803 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:44.803 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:44.803 [2024-07-25 13:50:33.643985] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:44.803 [2024-07-25 13:50:33.644090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63795 ] 00:07:44.803 [2024-07-25 13:50:33.782226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.061 [2024-07-25 13:50:33.870309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.061 [2024-07-25 13:50:33.870454] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:45.061 [2024-07-25 13:50:33.870470] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:45.061 [2024-07-25 13:50:33.870480] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.061 [2024-07-25 13:50:33.870530] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:45.061 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:07:45.061 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:45.061 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:07:45.061 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:07:45.061 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:07:45.061 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:45.061 00:07:45.061 real 0m0.375s 00:07:45.061 user 0m0.194s 00:07:45.061 sys 0m0.078s 00:07:45.061 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.061 13:50:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:45.061 ************************************ 00:07:45.061 END TEST dd_invalid_json 00:07:45.061 ************************************ 00:07:45.061 ************************************ 00:07:45.061 END TEST spdk_dd_negative 00:07:45.061 ************************************ 00:07:45.061 00:07:45.061 real 0m3.355s 00:07:45.061 user 0m1.680s 00:07:45.061 sys 0m1.328s 00:07:45.061 13:50:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.061 13:50:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:45.061 00:07:45.061 real 1m20.107s 00:07:45.061 user 0m52.908s 00:07:45.061 sys 0m33.609s 00:07:45.061 13:50:34 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.061 13:50:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:45.061 ************************************ 00:07:45.061 END TEST spdk_dd 00:07:45.061 ************************************ 00:07:45.061 13:50:34 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:07:45.061 13:50:34 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:45.061 13:50:34 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:45.061 13:50:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:45.061 13:50:34 -- common/autotest_common.sh@10 -- # set +x 00:07:45.321 13:50:34 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:45.321 13:50:34 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:45.321 13:50:34 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:07:45.321 13:50:34 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:07:45.321 13:50:34 -- 
spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:07:45.321 13:50:34 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:07:45.321 13:50:34 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:45.321 13:50:34 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:45.321 13:50:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.322 13:50:34 -- common/autotest_common.sh@10 -- # set +x 00:07:45.322 ************************************ 00:07:45.322 START TEST nvmf_tcp 00:07:45.322 ************************************ 00:07:45.322 13:50:34 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:45.322 * Looking for test storage... 00:07:45.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:45.322 13:50:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:45.322 13:50:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:45.322 13:50:34 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:45.322 13:50:34 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:45.322 13:50:34 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.322 13:50:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:45.322 ************************************ 00:07:45.322 START TEST nvmf_target_core 00:07:45.322 ************************************ 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:45.322 * Looking for test storage... 00:07:45.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.322 ************************************ 00:07:45.322 START TEST nvmf_host_management 00:07:45.322 ************************************ 00:07:45.322 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:45.582 * Looking for test storage... 
00:07:45.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:45.582 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 
-- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:45.583 Cannot find device "nvmf_init_br" 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:45.583 Cannot find device "nvmf_tgt_br" 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:45.583 Cannot find device "nvmf_tgt_br2" 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:45.583 Cannot find device "nvmf_init_br" 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:45.583 Cannot find device "nvmf_tgt_br" 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:45.583 Cannot find device "nvmf_tgt_br2" 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:45.583 Cannot find device "nvmf_br" 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:45.583 Cannot find device "nvmf_init_if" 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:45.583 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:45.583 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 
-- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:45.583 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:45.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:07:45.842 00:07:45.842 --- 10.0.0.2 ping statistics --- 00:07:45.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.842 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:45.842 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:07:45.842 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:07:45.842 00:07:45.842 --- 10.0.0.3 ping statistics --- 00:07:45.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.842 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:45.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:45.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:07:45.842 00:07:45.842 --- 10.0.0.1 ping statistics --- 00:07:45.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.842 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=64071 00:07:45.842 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 64071 00:07:45.843 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:45.843 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 64071 ']' 00:07:45.843 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.843 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
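The nvmf/common.sh trace above (@154 through @207) is nvmf_veth_init: it first tears down any leftover interfaces, then builds the topology the rest of this test talks over. Veth pairs have their host ends (nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2) attached to the nvmf_br bridge, the target ends (nvmf_tgt_if at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, the initiator side nvmf_init_if sits at 10.0.0.1, and an iptables rule admits TCP/4420; the three pings confirm reachability before nvmf_tgt is launched inside the namespace. A condensed sketch of the same steps, reusing the names and addresses from the trace (the second target interface is left out for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                  # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host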
00:07:45.843 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.843 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.843 13:50:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.102 [2024-07-25 13:50:34.900125] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:07:46.102 [2024-07-25 13:50:34.900231] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.102 [2024-07-25 13:50:35.042978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.361 [2024-07-25 13:50:35.164518] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.361 [2024-07-25 13:50:35.164845] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.361 [2024-07-25 13:50:35.165005] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.361 [2024-07-25 13:50:35.165154] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.361 [2024-07-25 13:50:35.165196] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:46.361 [2024-07-25 13:50:35.165536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.361 [2024-07-25 13:50:35.165671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.361 [2024-07-25 13:50:35.165889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:46.361 [2024-07-25 13:50:35.165894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.361 [2024-07-25 13:50:35.222923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:46.978 13:50:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.978 13:50:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:46.978 13:50:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:46.978 13:50:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:46.978 13:50:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.978 13:50:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.978 13:50:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:46.978 13:50:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.978 13:50:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.978 [2024-07-25 13:50:35.928154] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.978 13:50:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:07:46.978 13:50:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:46.978 13:50:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:46.978 13:50:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.978 13:50:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:46.978 13:50:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:46.978 13:50:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:46.978 13:50:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.978 13:50:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.978 Malloc0 00:07:46.978 [2024-07-25 13:50:36.006940] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=64128 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64128 /var/tmp/bdevperf.sock 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 64128 ']' 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:47.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
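host_management.sh @22-@30 above assembles an RPC batch in rpcs.txt and pipes it through rpc_cmd; the batch itself is never echoed, but the transport was already created at @18 and the notices that follow ("Malloc0", "NVMe/TCP Target Listening on 10.0.0.2 port 4420") indicate a malloc bdev exported through subsystem nqn.2016-06.io.spdk:cnode0 to host nqn.2016-06.io.spdk:host0. A plausible reconstruction as standalone rpc.py calls; the sizes, serial number and NQNs below are taken from elsewhere in this log, not from the unprinted batch file, so treat the exact arguments as assumptions:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # already issued at @18 above
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0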
00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:47.238 { 00:07:47.238 "params": { 00:07:47.238 "name": "Nvme$subsystem", 00:07:47.238 "trtype": "$TEST_TRANSPORT", 00:07:47.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:47.238 "adrfam": "ipv4", 00:07:47.238 "trsvcid": "$NVMF_PORT", 00:07:47.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:47.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:47.238 "hdgst": ${hdgst:-false}, 00:07:47.238 "ddgst": ${ddgst:-false} 00:07:47.238 }, 00:07:47.238 "method": "bdev_nvme_attach_controller" 00:07:47.238 } 00:07:47.238 EOF 00:07:47.238 )") 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:47.238 13:50:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:47.238 "params": { 00:07:47.238 "name": "Nvme0", 00:07:47.238 "trtype": "tcp", 00:07:47.238 "traddr": "10.0.0.2", 00:07:47.238 "adrfam": "ipv4", 00:07:47.238 "trsvcid": "4420", 00:07:47.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:47.238 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:47.238 "hdgst": false, 00:07:47.238 "ddgst": false 00:07:47.238 }, 00:07:47.238 "method": "bdev_nvme_attach_controller" 00:07:47.238 }' 00:07:47.238 [2024-07-25 13:50:36.106050] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:07:47.238 [2024-07-25 13:50:36.106122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64128 ] 00:07:47.238 [2024-07-25 13:50:36.237228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.498 [2024-07-25 13:50:36.373534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.498 [2024-07-25 13:50:36.439416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:47.756 Running I/O for 10 seconds... 
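The JSON fragment printed above is what gen_nvmf_target_json hands to bdevperf through the /dev/fd/63 process substitution at @72: a single bdev_nvme_attach_controller entry pointing Nvme0 at 10.0.0.2:4420 over NVMe/TCP. Run by hand it would look roughly like the sketch below; the params/method object is copied from the trace, while the outer subsystems/bdev wrapper is the usual SPDK JSON-config shape and is assumed here, since gen_nvmf_target_json's wrapping is not itself traced in this log:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
        --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    )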
00:07:48.325 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.325 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:48.325 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:48.325 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.325 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.325 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.325 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:48.325 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:48.325 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:48.325 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:48.325 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:48.325 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:48.326 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:48.326 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:48.326 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:48.326 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:48.326 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.326 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.326 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.326 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:07:48.326 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:07:48.326 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:48.326 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:48.326 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:48.326 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:48.326 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.326 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.326 13:50:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.326 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:48.326 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.326 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.326 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.326 13:50:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:48.326 [2024-07-25 13:50:37.199056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.326 [2024-07-25 13:50:37.199099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.326 [2024-07-25 13:50:37.199123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.326 [2024-07-25 13:50:37.199135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.326 [2024-07-25 13:50:37.199147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.326 [2024-07-25 13:50:37.199157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.326 [2024-07-25 13:50:37.199168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.326 [2024-07-25 13:50:37.199178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.326 [2024-07-25 13:50:37.199189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.326 [2024-07-25 13:50:37.199199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.326 [2024-07-25 13:50:37.199211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.326 [2024-07-25 13:50:37.199220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.326 [2024-07-25 13:50:37.199232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.326 [2024-07-25 13:50:37.199241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.326 [2024-07-25 13:50:37.199253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.326 [2024-07-25 13:50:37.199262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.326 [2024-07-25 13:50:37.199273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.326 [2024-07-25 13:50:37.199282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.326 [2024-07-25 13:50:37.199294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.326 [2024-07-25 13:50:37.199315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.326 [2024-07-25 13:50:37.199328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.326 [2024-07-25 13:50:37.199338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.326 [2024-07-25 13:50:37.199349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.326 [2024-07-25 13:50:37.199358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.326 [2024-07-25 13:50:37.199370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.326 [2024-07-25 13:50:37.199387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.326 [2024-07-25 13:50:37.199399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.326 [2024-07-25 13:50:37.199408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.326 [2024-07-25 13:50:37.199420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.326 [2024-07-25 13:50:37.199429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.326 [2024-07-25 13:50:37.199441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.326 [2024-07-25 13:50:37.199450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.326 [2024-07-25 13:50:37.199469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.326 [2024-07-25 13:50:37.199478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.326 [2024-07-25 13:50:37.199500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.326 [2024-07-25 13:50:37.199511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.326 [2024-07-25 13:50:37.199523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.326 [2024-07-25 13:50:37.199532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.326 [2024-07-25 13:50:37.199544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.326 [2024-07-25 13:50:37.199553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.199986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.199995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.200006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.200015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.200026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.200036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.200047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.200057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.200068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.200077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.327 [2024-07-25 13:50:37.200088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.327 [2024-07-25 13:50:37.200097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.328 [2024-07-25 13:50:37.200117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.328 [2024-07-25 13:50:37.200137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.328 [2024-07-25 13:50:37.200162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.328 [2024-07-25 13:50:37.200185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.328 [2024-07-25 13:50:37.200207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.328 [2024-07-25 13:50:37.200227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.328 [2024-07-25 13:50:37.200248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.328 [2024-07-25 13:50:37.200268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.328 [2024-07-25 13:50:37.200288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.328 [2024-07-25 13:50:37.200321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.328 [2024-07-25 13:50:37.200342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.328 [2024-07-25 13:50:37.200363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.328 [2024-07-25 13:50:37.200384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.328 [2024-07-25 13:50:37.200404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.328 [2024-07-25 13:50:37.200425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.328 [2024-07-25 13:50:37.200445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.328 [2024-07-25 13:50:37.200466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.328 [2024-07-25 13:50:37.200487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8eec0 is same with the state(5) to be set 00:07:48.328 [2024-07-25 13:50:37.200568] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc8eec0 was disconnected and freed. reset controller. 
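The block of qpair messages above is the fault being injected: @84 removed nqn.2016-06.io.spdk:host0 from cnode0's allowed hosts, so the target deleted the submission queue and all 64 outstanding 64 KiB WRITEs (lba 122880 through 130944, matching -q 64 -o 65536) completed as ABORTED - SQ DELETION, after which bdev_nvme freed qpair 0xc8eec0 and scheduled a controller reset. While this is happening, the host-side state can be inspected over bdevperf's own RPC socket, for example (a quick check, assuming the repo's rpc.py):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers     # controller/reset state for Nvme0
    $rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1    # I/O and error counters so far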
00:07:48.328 [2024-07-25 13:50:37.200683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:48.328 [2024-07-25 13:50:37.200700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:48.328 [2024-07-25 13:50:37.200739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:48.328 [2024-07-25 13:50:37.200760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:48.328 [2024-07-25 13:50:37.200780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.328 [2024-07-25 13:50:37.200789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc86d50 is same with the state(5) to be set 00:07:48.328 [2024-07-25 13:50:37.201869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:48.328 task offset: 122880 on job bdev=Nvme0n1 fails 00:07:48.328 00:07:48.328 Latency(us) 00:07:48.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.328 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:48.328 Job: Nvme0n1 ended in about 0.65 seconds with error 00:07:48.328 Verification LBA range: start 0x0 length 0x400 00:07:48.328 Nvme0n1 : 0.65 1478.09 92.38 98.54 0.00 39559.73 2159.71 38368.35 00:07:48.328 =================================================================================================================== 00:07:48.328 Total : 1478.09 92.38 98.54 0.00 39559.73 2159.71 38368.35 00:07:48.328 [2024-07-25 13:50:37.204023] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.328 [2024-07-25 13:50:37.204049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc86d50 (9): Bad file descriptor 00:07:48.328 [2024-07-25 13:50:37.209264] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
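Net result of the first run: the gate at @54-@64 had already seen 835 completed reads, the injected fault then cut the nominally 10-second job short at about 0.65 s (1478 IOPS with roughly 98 failed I/O per second), and once @85 restored the host NQN the reset path reconnected ("Resetting controller successful"). That waitforio gate is just an iostat poll over the bdevperf RPC socket; a sketch of the loop traced above, with a sleep between polls added here for pacing (the original trace succeeds on the first iteration, so no sleep is visible in the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in $(seq 10 -1 1); do
        ops=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
              | jq -r '.bdevs[0].num_read_ops')
        [ "$ops" -ge 100 ] && break    # enough I/O has flowed; proceed to the fault injection
        sleep 0.25                     # not in the original loop; added in this sketch
    done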
00:07:49.265 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64128 00:07:49.265 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64128) - No such process 00:07:49.265 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:49.265 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:49.265 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:49.265 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:49.265 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:49.265 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:49.265 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:49.265 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:49.265 { 00:07:49.265 "params": { 00:07:49.265 "name": "Nvme$subsystem", 00:07:49.265 "trtype": "$TEST_TRANSPORT", 00:07:49.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:49.265 "adrfam": "ipv4", 00:07:49.265 "trsvcid": "$NVMF_PORT", 00:07:49.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:49.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:49.265 "hdgst": ${hdgst:-false}, 00:07:49.265 "ddgst": ${ddgst:-false} 00:07:49.265 }, 00:07:49.265 "method": "bdev_nvme_attach_controller" 00:07:49.265 } 00:07:49.265 EOF 00:07:49.265 )") 00:07:49.265 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:49.265 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:49.265 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:49.265 13:50:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:49.265 "params": { 00:07:49.265 "name": "Nvme0", 00:07:49.265 "trtype": "tcp", 00:07:49.265 "traddr": "10.0.0.2", 00:07:49.265 "adrfam": "ipv4", 00:07:49.265 "trsvcid": "4420", 00:07:49.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:49.265 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:49.265 "hdgst": false, 00:07:49.265 "ddgst": false 00:07:49.265 }, 00:07:49.265 "method": "bdev_nvme_attach_controller" 00:07:49.265 }' 00:07:49.265 [2024-07-25 13:50:38.257907] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
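After kill -9 removes the first bdevperf (@91, with the "No such process" result tolerated via the true branch), @100 launches a second, one-second run with an identical attach config to prove that the target and its 10.0.0.2:4420 listener survived the abrupt client death. Independently of bdevperf, the same thing could be sanity-checked from the initiator side with nvme-cli, assuming nvme-cli is installed in this VM (the nvme-tcp module was already modprobe'd earlier in this test):

    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
         --hostnqn=nqn.2016-06.io.spdk:host0    # should list nqn.2016-06.io.spdk:cnode0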
00:07:49.265 [2024-07-25 13:50:38.257993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64166 ] 00:07:49.524 [2024-07-25 13:50:38.399998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.524 [2024-07-25 13:50:38.523929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.783 [2024-07-25 13:50:38.587758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:49.783 Running I/O for 1 seconds... 00:07:50.718 00:07:50.718 Latency(us) 00:07:50.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.718 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:50.718 Verification LBA range: start 0x0 length 0x400 00:07:50.718 Nvme0n1 : 1.04 1545.33 96.58 0.00 0.00 40606.86 4230.05 38130.04 00:07:50.718 =================================================================================================================== 00:07:50.718 Total : 1545.33 96.58 0.00 0.00 40606.86 4230.05 38130.04 00:07:50.975 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:50.975 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:50.975 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:50.975 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:50.975 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:50.976 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:50.976 13:50:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:51.233 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:51.233 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:51.233 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:51.234 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:51.234 rmmod nvme_tcp 00:07:51.234 rmmod nvme_fabrics 00:07:51.234 rmmod nvme_keyring 00:07:51.234 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:51.234 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:51.234 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:51.234 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 64071 ']' 00:07:51.234 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 64071 00:07:51.234 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 64071 ']' 00:07:51.234 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 64071 00:07:51.234 13:50:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:51.234 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:51.234 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64071 00:07:51.234 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:51.234 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:51.234 killing process with pid 64071 00:07:51.234 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64071' 00:07:51.234 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 64071 00:07:51.234 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 64071 00:07:51.492 [2024-07-25 13:50:40.345689] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:51.492 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:51.493 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:51.493 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:51.493 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:51.493 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:51.493 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.493 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.493 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.493 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:51.493 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:51.493 00:07:51.493 real 0m6.072s 00:07:51.493 user 0m23.530s 00:07:51.493 sys 0m1.555s 00:07:51.493 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.493 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.493 ************************************ 00:07:51.493 END TEST nvmf_host_management 00:07:51.493 ************************************ 00:07:51.493 13:50:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:51.493 13:50:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:51.493 13:50:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.493 13:50:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:51.493 ************************************ 00:07:51.493 START TEST nvmf_lvol 00:07:51.493 ************************************ 00:07:51.493 13:50:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:51.752 * Looking for test storage... 00:07:51.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.752 13:50:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:51.752 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:51.753 13:50:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:51.753 Cannot find device "nvmf_tgt_br" 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # true 
00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:51.753 Cannot find device "nvmf_tgt_br2" 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:51.753 Cannot find device "nvmf_tgt_br" 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:51.753 Cannot find device "nvmf_tgt_br2" 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:51.753 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:51.753 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:51.753 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:52.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:07:52.012 00:07:52.012 --- 10.0.0.2 ping statistics --- 00:07:52.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.012 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:52.012 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:52.012 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:07:52.012 00:07:52.012 --- 10.0.0.3 ping statistics --- 00:07:52.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.012 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:52.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:52.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:07:52.012 00:07:52.012 --- 10.0.0.1 ping statistics --- 00:07:52.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.012 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=64387 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 64387 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 64387 ']' 00:07:52.012 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.013 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.013 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.013 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.013 13:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:52.013 [2024-07-25 13:50:40.983221] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:07:52.013 [2024-07-25 13:50:40.983343] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.271 [2024-07-25 13:50:41.125426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:52.271 [2024-07-25 13:50:41.264848] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.271 [2024-07-25 13:50:41.265151] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.271 [2024-07-25 13:50:41.265341] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.271 [2024-07-25 13:50:41.265680] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.271 [2024-07-25 13:50:41.265931] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.271 [2024-07-25 13:50:41.266282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.271 [2024-07-25 13:50:41.266444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.271 [2024-07-25 13:50:41.266452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.530 [2024-07-25 13:50:41.328444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:53.096 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.096 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:53.096 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:53.096 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:53.096 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.096 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.096 13:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:53.354 [2024-07-25 13:50:42.238750] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.354 13:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:53.614 13:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:53.614 13:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:53.873 13:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:53.873 13:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:54.131 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:54.697 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8b0d41bd-12a4-4d9b-bdfb-e91bf035e825 00:07:54.697 13:50:43 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b0d41bd-12a4-4d9b-bdfb-e91bf035e825 lvol 20 00:07:54.697 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=368e34f4-078e-483b-b22e-0f67f707b584 00:07:54.697 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:54.956 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 368e34f4-078e-483b-b22e-0f67f707b584 00:07:55.213 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:55.471 [2024-07-25 13:50:44.465314] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.471 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:55.729 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=64463 00:07:55.729 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:55.729 13:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:57.106 13:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 368e34f4-078e-483b-b22e-0f67f707b584 MY_SNAPSHOT 00:07:57.106 13:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=21648b3b-860f-4a97-8edc-9e6da677cc0e 00:07:57.106 13:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 368e34f4-078e-483b-b22e-0f67f707b584 30 00:07:57.364 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 21648b3b-860f-4a97-8edc-9e6da677cc0e MY_CLONE 00:07:57.622 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=905c44a0-a346-4428-bcf1-7073f1a43f4e 00:07:57.622 13:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 905c44a0-a346-4428-bcf1-7073f1a43f4e 00:07:58.188 13:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 64463 00:08:06.334 Initializing NVMe Controllers 00:08:06.334 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:06.334 Controller IO queue size 128, less than required. 00:08:06.334 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:06.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:06.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:06.334 Initialization complete. Launching workers. 
00:08:06.334 ======================================================== 00:08:06.334 Latency(us) 00:08:06.334 Device Information : IOPS MiB/s Average min max 00:08:06.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10247.00 40.03 12503.23 2254.28 62550.76 00:08:06.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10276.90 40.14 12467.57 3629.47 66209.95 00:08:06.334 ======================================================== 00:08:06.334 Total : 20523.90 80.17 12485.37 2254.28 66209.95 00:08:06.334 00:08:06.334 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:06.334 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 368e34f4-078e-483b-b22e-0f67f707b584 00:08:06.901 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8b0d41bd-12a4-4d9b-bdfb-e91bf035e825 00:08:06.901 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:06.901 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:06.901 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:06.901 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:06.901 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:06.901 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:06.901 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:06.901 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:06.901 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:06.901 rmmod nvme_tcp 00:08:06.901 rmmod nvme_fabrics 00:08:07.187 rmmod nvme_keyring 00:08:07.187 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:07.187 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:07.187 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:07.187 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 64387 ']' 00:08:07.187 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 64387 00:08:07.187 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 64387 ']' 00:08:07.187 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 64387 00:08:07.187 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:07.187 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:07.187 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64387 00:08:07.187 killing process with pid 64387 00:08:07.187 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:07.188 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:07.188 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 64387' 00:08:07.188 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 64387 00:08:07.188 13:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 64387 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:07.446 ************************************ 00:08:07.446 END TEST nvmf_lvol 00:08:07.446 ************************************ 00:08:07.446 00:08:07.446 real 0m15.828s 00:08:07.446 user 1m5.796s 00:08:07.446 sys 0m4.155s 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:07.446 ************************************ 00:08:07.446 START TEST nvmf_lvs_grow 00:08:07.446 ************************************ 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:07.446 * Looking for test storage... 
00:08:07.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.446 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:07.447 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:07.706 Cannot find device "nvmf_tgt_br" 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:07.706 Cannot find device "nvmf_tgt_br2" 00:08:07.706 13:50:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:07.706 Cannot find device "nvmf_tgt_br" 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:07.706 Cannot find device "nvmf_tgt_br2" 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:07.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:07.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:07.706 13:50:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:07.706 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:07.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:08:07.965 00:08:07.965 --- 10.0.0.2 ping statistics --- 00:08:07.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.965 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:07.965 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:07.965 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:08:07.965 00:08:07.965 --- 10.0.0.3 ping statistics --- 00:08:07.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.965 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:07.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:07.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:08:07.965 00:08:07.965 --- 10.0.0.1 ping statistics --- 00:08:07.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.965 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=64784 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 64784 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 64784 ']' 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:07.965 13:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:07.965 [2024-07-25 13:50:56.869323] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
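nvmfappstart above launches the target inside the namespace (nvmfpid=64784) and waits for its RPC socket before the TCP transport is created in the next RPC of the trace. A minimal equivalent of that sequence, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods as the readiness probe (the real waitforlisten helper adds timeouts and cleanup on failure), would be:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# poll until the RPC server behind /var/tmp/spdk.sock answers
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192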
00:08:07.965 [2024-07-25 13:50:56.869406] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.223 [2024-07-25 13:50:57.011702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.223 [2024-07-25 13:50:57.138718] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.223 [2024-07-25 13:50:57.138786] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.223 [2024-07-25 13:50:57.138809] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.223 [2024-07-25 13:50:57.138820] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.223 [2024-07-25 13:50:57.138829] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.223 [2024-07-25 13:50:57.138876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.223 [2024-07-25 13:50:57.197783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:09.156 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.156 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:09.156 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:09.156 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:09.156 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.156 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.156 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:09.156 [2024-07-25 13:50:58.162931] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.156 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:09.156 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:09.156 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.156 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.414 ************************************ 00:08:09.414 START TEST lvs_grow_clean 00:08:09.414 ************************************ 00:08:09.414 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:09.414 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:09.414 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:09.414 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:09.414 13:50:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:09.414 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:09.414 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:09.414 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:09.414 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:09.414 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:09.673 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:09.673 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:09.931 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=660091a8-fbb2-46ab-bfef-b9af6fd5ef99 00:08:09.931 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 660091a8-fbb2-46ab-bfef-b9af6fd5ef99 00:08:09.931 13:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:10.190 13:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:10.190 13:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:10.190 13:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 660091a8-fbb2-46ab-bfef-b9af6fd5ef99 lvol 150 00:08:10.448 13:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=357b5271-8a58-465d-badd-10fedd13b9de 00:08:10.448 13:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:10.448 13:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:10.705 [2024-07-25 13:50:59.587340] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:10.705 [2024-07-25 13:50:59.587462] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:10.705 true 00:08:10.705 13:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 660091a8-fbb2-46ab-bfef-b9af6fd5ef99 00:08:10.705 13:50:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:10.963 13:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:10.963 13:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:11.220 13:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 357b5271-8a58-465d-badd-10fedd13b9de 00:08:11.478 13:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:11.737 [2024-07-25 13:51:00.640519] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.737 13:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:11.996 13:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=64867 00:08:11.996 13:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:11.996 13:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:11.996 13:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 64867 /var/tmp/bdevperf.sock 00:08:11.996 13:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 64867 ']' 00:08:11.996 13:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:11.996 13:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:11.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:11.996 13:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:11.996 13:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:11.996 13:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:11.996 [2024-07-25 13:51:00.950293] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
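At this point the lvs_grow_clean stack traced above is fully in place: a 200M file-backed AIO bdev, an lvstore with 4 MiB clusters (49 data clusters), a 150M lvol attached as a namespace of nqn.2016-06.io.spdk:cnode0, and the backing file already grown to 400M and rescanned. Reconstructed from the RPCs in the log (the $rpc and $aio variables are just shorthand for the full paths shown above), the sequence amounts to:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
truncate -s 200M "$aio"
$rpc bdev_aio_create "$aio" aio_bdev 4096
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)      # 49 data clusters
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
truncate -s 400M "$aio"                                      # grow the backing file...
$rpc bdev_aio_rescan aio_bdev                                # ...and let the AIO bdev see it
# total_data_clusters is still 49 here; only bdev_lvol_grow_lvstore claims the new space
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

While the bdevperf process started above drives random writes against that namespace, the test later issues bdev_lvol_grow_lvstore -u "$lvs", which expands the lvstore from 49 to 99 data clusters without disturbing the workload, as the trace that follows shows.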
00:08:11.996 [2024-07-25 13:51:00.950430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64867 ] 00:08:12.255 [2024-07-25 13:51:01.088691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.255 [2024-07-25 13:51:01.198155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.255 [2024-07-25 13:51:01.257139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:13.189 13:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.189 13:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:13.189 13:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:13.447 Nvme0n1 00:08:13.447 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:13.706 [ 00:08:13.706 { 00:08:13.706 "name": "Nvme0n1", 00:08:13.706 "aliases": [ 00:08:13.706 "357b5271-8a58-465d-badd-10fedd13b9de" 00:08:13.706 ], 00:08:13.706 "product_name": "NVMe disk", 00:08:13.706 "block_size": 4096, 00:08:13.706 "num_blocks": 38912, 00:08:13.706 "uuid": "357b5271-8a58-465d-badd-10fedd13b9de", 00:08:13.706 "assigned_rate_limits": { 00:08:13.706 "rw_ios_per_sec": 0, 00:08:13.706 "rw_mbytes_per_sec": 0, 00:08:13.706 "r_mbytes_per_sec": 0, 00:08:13.706 "w_mbytes_per_sec": 0 00:08:13.706 }, 00:08:13.706 "claimed": false, 00:08:13.706 "zoned": false, 00:08:13.706 "supported_io_types": { 00:08:13.706 "read": true, 00:08:13.706 "write": true, 00:08:13.706 "unmap": true, 00:08:13.706 "flush": true, 00:08:13.706 "reset": true, 00:08:13.706 "nvme_admin": true, 00:08:13.706 "nvme_io": true, 00:08:13.706 "nvme_io_md": false, 00:08:13.706 "write_zeroes": true, 00:08:13.706 "zcopy": false, 00:08:13.706 "get_zone_info": false, 00:08:13.706 "zone_management": false, 00:08:13.706 "zone_append": false, 00:08:13.706 "compare": true, 00:08:13.706 "compare_and_write": true, 00:08:13.706 "abort": true, 00:08:13.706 "seek_hole": false, 00:08:13.706 "seek_data": false, 00:08:13.706 "copy": true, 00:08:13.706 "nvme_iov_md": false 00:08:13.706 }, 00:08:13.706 "memory_domains": [ 00:08:13.706 { 00:08:13.706 "dma_device_id": "system", 00:08:13.706 "dma_device_type": 1 00:08:13.706 } 00:08:13.706 ], 00:08:13.706 "driver_specific": { 00:08:13.706 "nvme": [ 00:08:13.706 { 00:08:13.706 "trid": { 00:08:13.706 "trtype": "TCP", 00:08:13.706 "adrfam": "IPv4", 00:08:13.706 "traddr": "10.0.0.2", 00:08:13.706 "trsvcid": "4420", 00:08:13.706 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:13.706 }, 00:08:13.706 "ctrlr_data": { 00:08:13.706 "cntlid": 1, 00:08:13.706 "vendor_id": "0x8086", 00:08:13.706 "model_number": "SPDK bdev Controller", 00:08:13.706 "serial_number": "SPDK0", 00:08:13.706 "firmware_revision": "24.09", 00:08:13.706 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:13.706 "oacs": { 00:08:13.706 "security": 0, 00:08:13.706 "format": 0, 00:08:13.706 "firmware": 0, 00:08:13.706 "ns_manage": 0 
00:08:13.706 }, 00:08:13.706 "multi_ctrlr": true, 00:08:13.706 "ana_reporting": false 00:08:13.706 }, 00:08:13.706 "vs": { 00:08:13.706 "nvme_version": "1.3" 00:08:13.706 }, 00:08:13.706 "ns_data": { 00:08:13.706 "id": 1, 00:08:13.706 "can_share": true 00:08:13.706 } 00:08:13.706 } 00:08:13.706 ], 00:08:13.706 "mp_policy": "active_passive" 00:08:13.706 } 00:08:13.706 } 00:08:13.706 ] 00:08:13.706 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=64896 00:08:13.706 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:13.706 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:13.706 Running I/O for 10 seconds... 00:08:15.081 Latency(us) 00:08:15.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.081 Nvme0n1 : 1.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:15.081 =================================================================================================================== 00:08:15.081 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:15.081 00:08:15.648 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 660091a8-fbb2-46ab-bfef-b9af6fd5ef99 00:08:15.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.907 Nvme0n1 : 2.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:15.907 =================================================================================================================== 00:08:15.907 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:15.907 00:08:15.907 true 00:08:15.907 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:15.907 13:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 660091a8-fbb2-46ab-bfef-b9af6fd5ef99 00:08:16.165 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:16.165 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:16.165 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 64896 00:08:16.731 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.731 Nvme0n1 : 3.00 7577.67 29.60 0.00 0.00 0.00 0.00 0.00 00:08:16.731 =================================================================================================================== 00:08:16.731 Total : 7577.67 29.60 0.00 0.00 0.00 0.00 0.00 00:08:16.731 00:08:17.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.719 Nvme0n1 : 4.00 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:08:17.719 =================================================================================================================== 00:08:17.719 Total : 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:08:17.719 00:08:19.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.095 Nvme0n1 : 5.00 
7543.80 29.47 0.00 0.00 0.00 0.00 0.00 00:08:19.095 =================================================================================================================== 00:08:19.095 Total : 7543.80 29.47 0.00 0.00 0.00 0.00 0.00 00:08:19.095 00:08:20.031 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.031 Nvme0n1 : 6.00 7535.33 29.43 0.00 0.00 0.00 0.00 0.00 00:08:20.031 =================================================================================================================== 00:08:20.031 Total : 7535.33 29.43 0.00 0.00 0.00 0.00 0.00 00:08:20.031 00:08:20.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.967 Nvme0n1 : 7.00 7511.14 29.34 0.00 0.00 0.00 0.00 0.00 00:08:20.967 =================================================================================================================== 00:08:20.967 Total : 7511.14 29.34 0.00 0.00 0.00 0.00 0.00 00:08:20.967 00:08:21.903 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.903 Nvme0n1 : 8.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:21.903 =================================================================================================================== 00:08:21.903 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:21.903 00:08:22.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.947 Nvme0n1 : 9.00 7507.11 29.32 0.00 0.00 0.00 0.00 0.00 00:08:22.947 =================================================================================================================== 00:08:22.947 Total : 7507.11 29.32 0.00 0.00 0.00 0.00 0.00 00:08:22.947 00:08:23.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.883 Nvme0n1 : 10.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:23.883 =================================================================================================================== 00:08:23.883 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:23.883 00:08:23.883 00:08:23.883 Latency(us) 00:08:23.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.883 Nvme0n1 : 10.02 7491.40 29.26 0.00 0.00 17081.26 14298.76 39083.29 00:08:23.883 =================================================================================================================== 00:08:23.883 Total : 7491.40 29.26 0.00 0.00 17081.26 14298.76 39083.29 00:08:23.883 0 00:08:23.883 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 64867 00:08:23.883 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 64867 ']' 00:08:23.883 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 64867 00:08:23.883 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:23.883 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:23.883 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64867 00:08:23.883 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:23.883 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:23.883 killing process with pid 64867 00:08:23.883 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64867' 00:08:23.883 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 64867 00:08:23.883 Received shutdown signal, test time was about 10.000000 seconds 00:08:23.883 00:08:23.883 Latency(us) 00:08:23.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.883 =================================================================================================================== 00:08:23.883 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:23.883 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 64867 00:08:24.142 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:24.400 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:24.659 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 660091a8-fbb2-46ab-bfef-b9af6fd5ef99 00:08:24.659 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:24.918 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:24.918 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:24.918 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:25.176 [2024-07-25 13:51:13.999992] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:25.176 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 660091a8-fbb2-46ab-bfef-b9af6fd5ef99 00:08:25.176 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:25.176 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 660091a8-fbb2-46ab-bfef-b9af6fd5ef99 00:08:25.176 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.176 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.176 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.176 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.176 13:51:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.176 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.176 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.176 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:25.176 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 660091a8-fbb2-46ab-bfef-b9af6fd5ef99 00:08:25.435 request: 00:08:25.435 { 00:08:25.435 "uuid": "660091a8-fbb2-46ab-bfef-b9af6fd5ef99", 00:08:25.435 "method": "bdev_lvol_get_lvstores", 00:08:25.435 "req_id": 1 00:08:25.435 } 00:08:25.435 Got JSON-RPC error response 00:08:25.435 response: 00:08:25.435 { 00:08:25.435 "code": -19, 00:08:25.435 "message": "No such device" 00:08:25.435 } 00:08:25.435 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:25.435 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:25.435 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:25.435 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:25.435 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:25.694 aio_bdev 00:08:25.694 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 357b5271-8a58-465d-badd-10fedd13b9de 00:08:25.694 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=357b5271-8a58-465d-badd-10fedd13b9de 00:08:25.694 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:25.694 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:25.694 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:25.694 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:25.694 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:25.952 13:51:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 357b5271-8a58-465d-badd-10fedd13b9de -t 2000 00:08:26.211 [ 00:08:26.211 { 00:08:26.211 "name": "357b5271-8a58-465d-badd-10fedd13b9de", 00:08:26.211 "aliases": [ 00:08:26.211 "lvs/lvol" 00:08:26.211 ], 00:08:26.211 "product_name": "Logical Volume", 00:08:26.211 "block_size": 4096, 00:08:26.211 "num_blocks": 38912, 00:08:26.211 "uuid": "357b5271-8a58-465d-badd-10fedd13b9de", 00:08:26.211 
"assigned_rate_limits": { 00:08:26.211 "rw_ios_per_sec": 0, 00:08:26.211 "rw_mbytes_per_sec": 0, 00:08:26.211 "r_mbytes_per_sec": 0, 00:08:26.211 "w_mbytes_per_sec": 0 00:08:26.211 }, 00:08:26.211 "claimed": false, 00:08:26.211 "zoned": false, 00:08:26.211 "supported_io_types": { 00:08:26.211 "read": true, 00:08:26.211 "write": true, 00:08:26.211 "unmap": true, 00:08:26.211 "flush": false, 00:08:26.211 "reset": true, 00:08:26.211 "nvme_admin": false, 00:08:26.211 "nvme_io": false, 00:08:26.211 "nvme_io_md": false, 00:08:26.211 "write_zeroes": true, 00:08:26.211 "zcopy": false, 00:08:26.211 "get_zone_info": false, 00:08:26.211 "zone_management": false, 00:08:26.211 "zone_append": false, 00:08:26.211 "compare": false, 00:08:26.211 "compare_and_write": false, 00:08:26.211 "abort": false, 00:08:26.211 "seek_hole": true, 00:08:26.211 "seek_data": true, 00:08:26.211 "copy": false, 00:08:26.211 "nvme_iov_md": false 00:08:26.211 }, 00:08:26.211 "driver_specific": { 00:08:26.211 "lvol": { 00:08:26.211 "lvol_store_uuid": "660091a8-fbb2-46ab-bfef-b9af6fd5ef99", 00:08:26.211 "base_bdev": "aio_bdev", 00:08:26.211 "thin_provision": false, 00:08:26.211 "num_allocated_clusters": 38, 00:08:26.211 "snapshot": false, 00:08:26.211 "clone": false, 00:08:26.211 "esnap_clone": false 00:08:26.211 } 00:08:26.211 } 00:08:26.211 } 00:08:26.211 ] 00:08:26.211 13:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:26.211 13:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:26.211 13:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 660091a8-fbb2-46ab-bfef-b9af6fd5ef99 00:08:26.469 13:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:26.469 13:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 660091a8-fbb2-46ab-bfef-b9af6fd5ef99 00:08:26.469 13:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:26.727 13:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:26.727 13:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 357b5271-8a58-465d-badd-10fedd13b9de 00:08:26.985 13:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 660091a8-fbb2-46ab-bfef-b9af6fd5ef99 00:08:27.243 13:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:27.502 13:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:28.069 ************************************ 00:08:28.069 END TEST lvs_grow_clean 00:08:28.069 ************************************ 00:08:28.069 00:08:28.069 real 0m18.691s 00:08:28.069 user 0m17.514s 00:08:28.069 sys 0m2.703s 00:08:28.069 13:51:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.069 13:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:28.069 13:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:28.069 13:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:28.069 13:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.069 13:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:28.069 ************************************ 00:08:28.069 START TEST lvs_grow_dirty 00:08:28.069 ************************************ 00:08:28.069 13:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:28.069 13:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:28.069 13:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:28.069 13:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:28.069 13:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:28.069 13:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:28.069 13:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:28.069 13:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:28.069 13:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:28.069 13:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:28.328 13:51:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:28.328 13:51:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:28.587 13:51:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=368017b4-a477-41c0-8060-2c124789ea78 00:08:28.587 13:51:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:28.587 13:51:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 368017b4-a477-41c0-8060-2c124789ea78 00:08:28.846 13:51:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:28.846 13:51:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # 
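The dirty variant is now rebuilding the same stack from scratch on a fresh backing file. Before following it, the numbers the clean case just verified are worth spelling out. The grow was issued in the middle of the 10-second bdevperf run with bdev_lvol_grow_lvstore, and total_data_clusters went from 49 to 99: the 400 MiB file holds 100 clusters of 4 MiB, less the metadata share. With the 150 MiB lvol still occupying 38 clusters, free_clusters reads 99 - 38 = 61. The clean case also covered the negative path: deleting the AIO base bdev unloads the lvstore, so the NOT wrapper expects bdev_lvol_get_lvstores to fail with -19 "No such device", and re-creating the AIO bdev over the same file brings the lvstore and its lvol straight back with the same counts. Reproduced by hand, the grow and the two cluster checks are just the following (a sketch, with $lvs standing for the lvstore UUID):

scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99 after the grow
scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # 99 - 38 = 61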
(( data_clusters == 49 )) 00:08:28.846 13:51:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 368017b4-a477-41c0-8060-2c124789ea78 lvol 150 00:08:29.105 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=474e1883-ddb4-468c-81aa-7d99002fe5e5 00:08:29.105 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:29.105 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:29.363 [2024-07-25 13:51:18.340301] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:29.363 [2024-07-25 13:51:18.340390] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:29.363 true 00:08:29.363 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 368017b4-a477-41c0-8060-2c124789ea78 00:08:29.363 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:29.621 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:29.621 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:29.880 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 474e1883-ddb4-468c-81aa-7d99002fe5e5 00:08:30.447 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:30.447 [2024-07-25 13:51:19.400951] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.447 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:30.706 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65150 00:08:30.706 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:30.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
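By the end of this stretch the dirty variant has an identical stack on its own 200 MiB file (lvstore 368017b4-a477-41c0-8060-2c124789ea78, lvol 474e1883-ddb4-468c-81aa-7d99002fe5e5), exported on the same subsystem with the TCP listener re-added, and a second bdevperf instance (pid 65150) starting up. The initiator side, shared by both variants, reduces to roughly the following; this is a sketch assuming the SPDK repo root as the working directory, not a verbatim excerpt of the test script:

# bdevperf runs as a separate process with its own RPC socket; -z defers the run
# until the perform_tests RPC arrives, -w/-o/-q/-t ask for 10 s of 4 KiB random
# writes at queue depth 128, and -S 1 prints one performance sample per second.
build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
    -w randwrite -t 10 -S 1 -z &

# Attach the exported namespace as an NVMe-oF controller, then start the run.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Because the workload lives in its own process and reaches the lvol only through the NVMe/TCP listener, the mid-run grow that follows is exercised under I/O exactly as a remote host would observe it.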
00:08:30.706 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:30.706 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65150 /var/tmp/bdevperf.sock 00:08:30.706 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 65150 ']' 00:08:30.706 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:30.706 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.706 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:30.706 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.706 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:30.965 [2024-07-25 13:51:19.743475] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:08:30.965 [2024-07-25 13:51:19.743753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65150 ] 00:08:30.965 [2024-07-25 13:51:19.875556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.965 [2024-07-25 13:51:19.979817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.222 [2024-07-25 13:51:20.035063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:31.790 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.790 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:31.790 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:32.359 Nvme0n1 00:08:32.359 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:32.359 [ 00:08:32.359 { 00:08:32.359 "name": "Nvme0n1", 00:08:32.359 "aliases": [ 00:08:32.359 "474e1883-ddb4-468c-81aa-7d99002fe5e5" 00:08:32.359 ], 00:08:32.359 "product_name": "NVMe disk", 00:08:32.359 "block_size": 4096, 00:08:32.359 "num_blocks": 38912, 00:08:32.359 "uuid": "474e1883-ddb4-468c-81aa-7d99002fe5e5", 00:08:32.359 "assigned_rate_limits": { 00:08:32.359 "rw_ios_per_sec": 0, 00:08:32.359 "rw_mbytes_per_sec": 0, 00:08:32.359 "r_mbytes_per_sec": 0, 00:08:32.359 "w_mbytes_per_sec": 0 00:08:32.359 }, 00:08:32.359 "claimed": false, 00:08:32.359 "zoned": false, 00:08:32.359 "supported_io_types": { 00:08:32.359 "read": true, 00:08:32.359 "write": true, 00:08:32.359 "unmap": true, 00:08:32.359 "flush": true, 00:08:32.359 "reset": true, 00:08:32.359 
"nvme_admin": true, 00:08:32.359 "nvme_io": true, 00:08:32.359 "nvme_io_md": false, 00:08:32.359 "write_zeroes": true, 00:08:32.359 "zcopy": false, 00:08:32.359 "get_zone_info": false, 00:08:32.359 "zone_management": false, 00:08:32.359 "zone_append": false, 00:08:32.359 "compare": true, 00:08:32.359 "compare_and_write": true, 00:08:32.359 "abort": true, 00:08:32.359 "seek_hole": false, 00:08:32.359 "seek_data": false, 00:08:32.359 "copy": true, 00:08:32.359 "nvme_iov_md": false 00:08:32.359 }, 00:08:32.359 "memory_domains": [ 00:08:32.359 { 00:08:32.359 "dma_device_id": "system", 00:08:32.359 "dma_device_type": 1 00:08:32.359 } 00:08:32.359 ], 00:08:32.359 "driver_specific": { 00:08:32.359 "nvme": [ 00:08:32.359 { 00:08:32.359 "trid": { 00:08:32.359 "trtype": "TCP", 00:08:32.359 "adrfam": "IPv4", 00:08:32.359 "traddr": "10.0.0.2", 00:08:32.359 "trsvcid": "4420", 00:08:32.359 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:32.359 }, 00:08:32.359 "ctrlr_data": { 00:08:32.359 "cntlid": 1, 00:08:32.359 "vendor_id": "0x8086", 00:08:32.359 "model_number": "SPDK bdev Controller", 00:08:32.359 "serial_number": "SPDK0", 00:08:32.359 "firmware_revision": "24.09", 00:08:32.359 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:32.359 "oacs": { 00:08:32.359 "security": 0, 00:08:32.359 "format": 0, 00:08:32.359 "firmware": 0, 00:08:32.359 "ns_manage": 0 00:08:32.359 }, 00:08:32.359 "multi_ctrlr": true, 00:08:32.359 "ana_reporting": false 00:08:32.359 }, 00:08:32.359 "vs": { 00:08:32.359 "nvme_version": "1.3" 00:08:32.359 }, 00:08:32.359 "ns_data": { 00:08:32.359 "id": 1, 00:08:32.359 "can_share": true 00:08:32.359 } 00:08:32.359 } 00:08:32.359 ], 00:08:32.359 "mp_policy": "active_passive" 00:08:32.359 } 00:08:32.359 } 00:08:32.359 ] 00:08:32.359 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65169 00:08:32.359 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:32.359 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:32.618 Running I/O for 10 seconds... 
00:08:33.553 Latency(us) 00:08:33.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.553 Nvme0n1 : 1.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:33.553 =================================================================================================================== 00:08:33.554 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:33.554 00:08:34.487 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 368017b4-a477-41c0-8060-2c124789ea78 00:08:34.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.487 Nvme0n1 : 2.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:08:34.487 =================================================================================================================== 00:08:34.487 Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:08:34.487 00:08:34.744 true 00:08:34.744 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 368017b4-a477-41c0-8060-2c124789ea78 00:08:34.744 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:35.001 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:35.001 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:35.001 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 65169 00:08:35.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.567 Nvme0n1 : 3.00 7704.67 30.10 0.00 0.00 0.00 0.00 0.00 00:08:35.567 =================================================================================================================== 00:08:35.567 Total : 7704.67 30.10 0.00 0.00 0.00 0.00 0.00 00:08:35.567 00:08:36.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.500 Nvme0n1 : 4.00 7715.25 30.14 0.00 0.00 0.00 0.00 0.00 00:08:36.500 =================================================================================================================== 00:08:36.500 Total : 7715.25 30.14 0.00 0.00 0.00 0.00 0.00 00:08:36.500 00:08:37.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.875 Nvme0n1 : 5.00 7696.20 30.06 0.00 0.00 0.00 0.00 0.00 00:08:37.875 =================================================================================================================== 00:08:37.875 Total : 7696.20 30.06 0.00 0.00 0.00 0.00 0.00 00:08:37.875 00:08:38.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.442 Nvme0n1 : 6.00 7618.17 29.76 0.00 0.00 0.00 0.00 0.00 00:08:38.442 =================================================================================================================== 00:08:38.442 Total : 7618.17 29.76 0.00 0.00 0.00 0.00 0.00 00:08:38.442 00:08:39.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.815 Nvme0n1 : 7.00 7545.86 29.48 0.00 0.00 0.00 0.00 0.00 00:08:39.815 =================================================================================================================== 00:08:39.815 
Total : 7545.86 29.48 0.00 0.00 0.00 0.00 0.00 00:08:39.815 00:08:40.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.748 Nvme0n1 : 8.00 7523.38 29.39 0.00 0.00 0.00 0.00 0.00 00:08:40.748 =================================================================================================================== 00:08:40.748 Total : 7523.38 29.39 0.00 0.00 0.00 0.00 0.00 00:08:40.748 00:08:41.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.701 Nvme0n1 : 9.00 7491.78 29.26 0.00 0.00 0.00 0.00 0.00 00:08:41.701 =================================================================================================================== 00:08:41.701 Total : 7491.78 29.26 0.00 0.00 0.00 0.00 0.00 00:08:41.701 00:08:42.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.632 Nvme0n1 : 10.00 7466.50 29.17 0.00 0.00 0.00 0.00 0.00 00:08:42.632 =================================================================================================================== 00:08:42.632 Total : 7466.50 29.17 0.00 0.00 0.00 0.00 0.00 00:08:42.633 00:08:42.633 00:08:42.633 Latency(us) 00:08:42.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.633 Nvme0n1 : 10.00 7476.80 29.21 0.00 0.00 17114.09 6106.76 44087.85 00:08:42.633 =================================================================================================================== 00:08:42.633 Total : 7476.80 29.21 0.00 0.00 17114.09 6106.76 44087.85 00:08:42.633 0 00:08:42.633 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65150 00:08:42.633 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 65150 ']' 00:08:42.633 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 65150 00:08:42.633 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:42.633 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.633 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65150 00:08:42.633 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:42.633 killing process with pid 65150 00:08:42.633 Received shutdown signal, test time was about 10.000000 seconds 00:08:42.633 00:08:42.633 Latency(us) 00:08:42.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.633 =================================================================================================================== 00:08:42.633 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:42.633 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:42.633 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65150' 00:08:42.633 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 65150 00:08:42.633 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 65150 
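With the 10-second run complete (again flat at roughly 7.5k IOPS across the grow, with no failed or timed-out I/O) and bdevperf stopped, the target-side teardown begins, and this is where the dirty variant earns its name. After removing the listener, deleting the subsystem and confirming free_clusters is still 61, the script does not delete the lvstore. Instead it kills the long-running nvmf_tgt outright, so the blobstore behind the lvstore never gets a clean shutdown, and then brings up a replacement target. Paraphrased from the trace that follows (not the script verbatim; $nvmfpid stands for the old target's PID, 64784 here):

kill -9 "$nvmfpid"
wait "$nvmfpid" || true     # bash reports the job as Killed; a non-zero status is expected
ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &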
00:08:42.890 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:43.147 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:43.405 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 368017b4-a477-41c0-8060-2c124789ea78 00:08:43.405 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:43.662 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:43.663 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:43.663 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 64784 00:08:43.663 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 64784 00:08:43.663 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 64784 Killed "${NVMF_APP[@]}" "$@" 00:08:43.663 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:43.663 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:43.663 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:43.663 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:43.663 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:43.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.663 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=65311 00:08:43.663 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 65311 00:08:43.663 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 65311 ']' 00:08:43.663 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.663 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:43.663 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:43.663 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
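Once the replacement target (pid 65311) is up, the test re-creates the AIO bdev over the same 400 MiB file. Loading the lvstore now finds a blobstore that was never cleanly closed, which is what the "Performing recovery on blobstore" and "Recover: blob 0x0 / 0x1" notices just below are about: the metadata is replayed and the lvol reappears without being re-created explicitly. The checks that follow assert that the grow done under the old target survived the SIGKILL: free_clusters still reads 61 and total_data_clusters still reads 99. A sketch of that reload-and-verify step (paths relative to the repo root, $lvs and $lvol as before):

scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096   # triggers blobstore recovery on load
scripts/rpc.py bdev_wait_for_examine                                     # let vbdev_lvol finish loading the lvstore
scripts/rpc.py bdev_get_bdevs -b "$lvol" -t 2000                         # lvol is back, 38 clusters still allocated
scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # still 61
scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 99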
00:08:43.663 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:43.663 13:51:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:43.663 [2024-07-25 13:51:32.666499] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:08:43.663 [2024-07-25 13:51:32.667439] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.921 [2024-07-25 13:51:32.817619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.921 [2024-07-25 13:51:32.949788] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.921 [2024-07-25 13:51:32.949846] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.921 [2024-07-25 13:51:32.949869] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.921 [2024-07-25 13:51:32.949881] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.921 [2024-07-25 13:51:32.949890] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.921 [2024-07-25 13:51:32.949927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.179 [2024-07-25 13:51:33.011380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:44.750 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:44.750 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:44.750 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:44.750 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:44.750 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:44.750 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.750 13:51:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:45.007 [2024-07-25 13:51:33.991327] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:45.007 [2024-07-25 13:51:33.991611] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:45.007 [2024-07-25 13:51:33.991822] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:45.007 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:45.007 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 474e1883-ddb4-468c-81aa-7d99002fe5e5 00:08:45.007 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=474e1883-ddb4-468c-81aa-7d99002fe5e5 00:08:45.007 13:51:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:45.007 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:45.007 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:45.007 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:45.007 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:45.573 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 474e1883-ddb4-468c-81aa-7d99002fe5e5 -t 2000 00:08:45.573 [ 00:08:45.573 { 00:08:45.573 "name": "474e1883-ddb4-468c-81aa-7d99002fe5e5", 00:08:45.573 "aliases": [ 00:08:45.573 "lvs/lvol" 00:08:45.573 ], 00:08:45.573 "product_name": "Logical Volume", 00:08:45.573 "block_size": 4096, 00:08:45.573 "num_blocks": 38912, 00:08:45.573 "uuid": "474e1883-ddb4-468c-81aa-7d99002fe5e5", 00:08:45.573 "assigned_rate_limits": { 00:08:45.573 "rw_ios_per_sec": 0, 00:08:45.573 "rw_mbytes_per_sec": 0, 00:08:45.573 "r_mbytes_per_sec": 0, 00:08:45.573 "w_mbytes_per_sec": 0 00:08:45.573 }, 00:08:45.573 "claimed": false, 00:08:45.573 "zoned": false, 00:08:45.573 "supported_io_types": { 00:08:45.573 "read": true, 00:08:45.573 "write": true, 00:08:45.573 "unmap": true, 00:08:45.573 "flush": false, 00:08:45.573 "reset": true, 00:08:45.573 "nvme_admin": false, 00:08:45.573 "nvme_io": false, 00:08:45.573 "nvme_io_md": false, 00:08:45.573 "write_zeroes": true, 00:08:45.573 "zcopy": false, 00:08:45.573 "get_zone_info": false, 00:08:45.573 "zone_management": false, 00:08:45.573 "zone_append": false, 00:08:45.573 "compare": false, 00:08:45.573 "compare_and_write": false, 00:08:45.573 "abort": false, 00:08:45.573 "seek_hole": true, 00:08:45.573 "seek_data": true, 00:08:45.573 "copy": false, 00:08:45.573 "nvme_iov_md": false 00:08:45.573 }, 00:08:45.573 "driver_specific": { 00:08:45.573 "lvol": { 00:08:45.573 "lvol_store_uuid": "368017b4-a477-41c0-8060-2c124789ea78", 00:08:45.573 "base_bdev": "aio_bdev", 00:08:45.573 "thin_provision": false, 00:08:45.573 "num_allocated_clusters": 38, 00:08:45.573 "snapshot": false, 00:08:45.573 "clone": false, 00:08:45.573 "esnap_clone": false 00:08:45.573 } 00:08:45.573 } 00:08:45.573 } 00:08:45.573 ] 00:08:45.846 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:45.846 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 368017b4-a477-41c0-8060-2c124789ea78 00:08:45.846 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:46.121 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:46.121 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 368017b4-a477-41c0-8060-2c124789ea78 00:08:46.121 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r 
'.[0].total_data_clusters' 00:08:46.378 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:46.378 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:46.635 [2024-07-25 13:51:35.432806] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:46.635 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 368017b4-a477-41c0-8060-2c124789ea78 00:08:46.635 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:46.635 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 368017b4-a477-41c0-8060-2c124789ea78 00:08:46.635 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.635 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:46.635 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.635 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:46.636 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.636 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:46.636 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.636 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:46.636 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 368017b4-a477-41c0-8060-2c124789ea78 00:08:46.892 request: 00:08:46.892 { 00:08:46.892 "uuid": "368017b4-a477-41c0-8060-2c124789ea78", 00:08:46.892 "method": "bdev_lvol_get_lvstores", 00:08:46.892 "req_id": 1 00:08:46.892 } 00:08:46.892 Got JSON-RPC error response 00:08:46.892 response: 00:08:46.892 { 00:08:46.893 "code": -19, 00:08:46.893 "message": "No such device" 00:08:46.893 } 00:08:46.893 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:46.893 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:46.893 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:46.893 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:46.893 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:47.151 aio_bdev 00:08:47.151 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 474e1883-ddb4-468c-81aa-7d99002fe5e5 00:08:47.151 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=474e1883-ddb4-468c-81aa-7d99002fe5e5 00:08:47.151 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:47.151 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:47.151 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:47.151 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:47.151 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:47.409 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 474e1883-ddb4-468c-81aa-7d99002fe5e5 -t 2000 00:08:47.667 [ 00:08:47.667 { 00:08:47.667 "name": "474e1883-ddb4-468c-81aa-7d99002fe5e5", 00:08:47.667 "aliases": [ 00:08:47.667 "lvs/lvol" 00:08:47.667 ], 00:08:47.667 "product_name": "Logical Volume", 00:08:47.667 "block_size": 4096, 00:08:47.667 "num_blocks": 38912, 00:08:47.667 "uuid": "474e1883-ddb4-468c-81aa-7d99002fe5e5", 00:08:47.667 "assigned_rate_limits": { 00:08:47.667 "rw_ios_per_sec": 0, 00:08:47.667 "rw_mbytes_per_sec": 0, 00:08:47.667 "r_mbytes_per_sec": 0, 00:08:47.667 "w_mbytes_per_sec": 0 00:08:47.667 }, 00:08:47.667 "claimed": false, 00:08:47.667 "zoned": false, 00:08:47.667 "supported_io_types": { 00:08:47.667 "read": true, 00:08:47.667 "write": true, 00:08:47.667 "unmap": true, 00:08:47.667 "flush": false, 00:08:47.667 "reset": true, 00:08:47.667 "nvme_admin": false, 00:08:47.667 "nvme_io": false, 00:08:47.667 "nvme_io_md": false, 00:08:47.667 "write_zeroes": true, 00:08:47.667 "zcopy": false, 00:08:47.667 "get_zone_info": false, 00:08:47.667 "zone_management": false, 00:08:47.667 "zone_append": false, 00:08:47.667 "compare": false, 00:08:47.667 "compare_and_write": false, 00:08:47.667 "abort": false, 00:08:47.667 "seek_hole": true, 00:08:47.667 "seek_data": true, 00:08:47.667 "copy": false, 00:08:47.667 "nvme_iov_md": false 00:08:47.667 }, 00:08:47.667 "driver_specific": { 00:08:47.667 "lvol": { 00:08:47.667 "lvol_store_uuid": "368017b4-a477-41c0-8060-2c124789ea78", 00:08:47.667 "base_bdev": "aio_bdev", 00:08:47.667 "thin_provision": false, 00:08:47.667 "num_allocated_clusters": 38, 00:08:47.667 "snapshot": false, 00:08:47.667 "clone": false, 00:08:47.667 "esnap_clone": false 00:08:47.667 } 00:08:47.667 } 00:08:47.667 } 00:08:47.667 ] 00:08:47.667 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:47.667 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 368017b4-a477-41c0-8060-2c124789ea78 00:08:47.667 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r 
'.[0].free_clusters' 00:08:47.925 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:47.925 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 368017b4-a477-41c0-8060-2c124789ea78 00:08:47.925 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:48.184 13:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:48.184 13:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 474e1883-ddb4-468c-81aa-7d99002fe5e5 00:08:48.442 13:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 368017b4-a477-41c0-8060-2c124789ea78 00:08:48.699 13:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:48.957 13:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:49.522 00:08:49.522 real 0m21.409s 00:08:49.522 user 0m44.895s 00:08:49.522 sys 0m8.257s 00:08:49.522 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.522 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:49.522 ************************************ 00:08:49.522 END TEST lvs_grow_dirty 00:08:49.522 ************************************ 00:08:49.522 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:49.522 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:49.522 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:49.522 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:49.522 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:49.522 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:49.522 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:49.522 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:49.522 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:49.522 nvmf_trace.0 00:08:49.522 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:49.522 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:49.522 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:49.522 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:49.779 13:51:38 
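Stripped of the xtrace prefixes, the tail of lvs_grow_dirty above tears its stack down in reverse order of creation. A condensed sketch of that sequence, using the rpc.py path and the UUIDs from this particular run (they differ on every run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_lvol_delete 474e1883-ddb4-468c-81aa-7d99002fe5e5              # drop the logical volume first
$rpc bdev_lvol_delete_lvstore -u 368017b4-a477-41c0-8060-2c124789ea78   # then the lvstore that held it
$rpc bdev_aio_delete aio_bdev                                           # then the AIO bdev backing the lvstore
rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev            # finally the file behind the AIO bdev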
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:49.779 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:49.779 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:49.779 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:49.779 rmmod nvme_tcp 00:08:49.779 rmmod nvme_fabrics 00:08:49.779 rmmod nvme_keyring 00:08:49.779 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:49.779 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:49.779 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:49.779 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 65311 ']' 00:08:49.780 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 65311 00:08:49.780 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 65311 ']' 00:08:49.780 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 65311 00:08:49.780 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:49.780 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:49.780 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65311 00:08:49.780 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:49.780 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:49.780 killing process with pid 65311 00:08:49.780 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65311' 00:08:49.780 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 65311 00:08:49.780 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 65311 00:08:50.038 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:50.038 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:50.038 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:50.038 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.038 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:50.038 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.038 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.038 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.038 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:50.038 00:08:50.038 real 0m42.540s 00:08:50.039 user 1m9.174s 00:08:50.039 sys 0m11.647s 00:08:50.039 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.039 
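nvmftestfini, traced just above, then unwinds the host side before the END banner. A condensed sketch of what those steps amount to (65311 is this run's nvmf_tgt pid; the rmmod lines in the trace are the verbose output of the module removals):

sync
modprobe -v -r nvme-tcp         # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above come from this removal chain
modprobe -v -r nvme-fabrics
kill 65311 && wait 65311        # killprocess: stop the target reactor process and reap it
ip -4 addr flush nvmf_init_if   # drop the initiator-side test address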
************************************ 00:08:50.039 END TEST nvmf_lvs_grow 00:08:50.039 ************************************ 00:08:50.039 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.039 13:51:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:50.039 13:51:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:50.039 13:51:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.039 13:51:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.039 ************************************ 00:08:50.039 START TEST nvmf_bdev_io_wait 00:08:50.039 ************************************ 00:08:50.039 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:50.039 * Looking for test storage... 00:08:50.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.039 
13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.039 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:50.040 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:50.040 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:50.040 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:50.040 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:50.040 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:50.040 Cannot find device "nvmf_tgt_br" 00:08:50.040 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:08:50.040 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:50.297 Cannot find device "nvmf_tgt_br2" 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:50.297 Cannot find device "nvmf_tgt_br" 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:50.297 Cannot find device "nvmf_tgt_br2" 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:50.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:50.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:50.297 13:51:39 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:50.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:08:50.297 00:08:50.297 --- 10.0.0.2 ping statistics --- 00:08:50.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.297 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:50.297 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:50.297 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:08:50.297 00:08:50.297 --- 10.0.0.3 ping statistics --- 00:08:50.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.297 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:50.297 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:50.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:50.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:08:50.557 00:08:50.557 --- 10.0.0.1 ping statistics --- 00:08:50.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.557 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=65622 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 65622 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 65622 ']' 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
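Condensed for readability, the nvmf_veth_init sequence traced above (from ip netns add through the pings) builds the following topology. Every command here is lifted from the trace; only the per-interface "ip link set ... up" calls are folded into a single comment:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays in the root namespace
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target ends get moved into the namespace
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
# ...bring every interface (and lo inside the namespace) up, then bridge the *_br peer ends together:
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # the ping statistics above verify the path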
00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.557 13:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.557 [2024-07-25 13:51:39.410917] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:08:50.557 [2024-07-25 13:51:39.410998] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.557 [2024-07-25 13:51:39.547150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.815 [2024-07-25 13:51:39.668401] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.815 [2024-07-25 13:51:39.668463] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.815 [2024-07-25 13:51:39.668476] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.815 [2024-07-25 13:51:39.668485] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.815 [2024-07-25 13:51:39.668492] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.815 [2024-07-25 13:51:39.668695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.815 [2024-07-25 13:51:39.668868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.815 [2024-07-25 13:51:39.668870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.815 [2024-07-25 13:51:39.668769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.750 [2024-07-25 13:51:40.513778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion 
override: uring 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.750 [2024-07-25 13:51:40.529977] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.750 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.750 Malloc0 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.751 [2024-07-25 13:51:40.590146] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=65657 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:51.751 13:51:40 
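Stripped of prefixes, the target-side bring-up for this test reduces to the RPC sequence below. rpc_cmd is the suite's helper that forwards to scripts/rpc.py against the nvmf_tgt started above; the option meanings in the comments are the usual rpc.py ones, not anything stated in the trace:

rpc_cmd bdev_set_options -p 5 -c 1                         # tune bdev options before framework init
rpc_cmd framework_start_init                               # needed because the target was started with --wait-for-rpc
rpc_cmd nvmf_create_transport -t tcp -o -u 8192            # TCP transport; -u sets the in-capsule data size
rpc_cmd bdev_malloc_create 64 512 -b Malloc0               # 64 MiB RAM bdev with 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as a namespace
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420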
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=65659 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:51.751 { 00:08:51.751 "params": { 00:08:51.751 "name": "Nvme$subsystem", 00:08:51.751 "trtype": "$TEST_TRANSPORT", 00:08:51.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.751 "adrfam": "ipv4", 00:08:51.751 "trsvcid": "$NVMF_PORT", 00:08:51.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.751 "hdgst": ${hdgst:-false}, 00:08:51.751 "ddgst": ${ddgst:-false} 00:08:51.751 }, 00:08:51.751 "method": "bdev_nvme_attach_controller" 00:08:51.751 } 00:08:51.751 EOF 00:08:51.751 )") 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:51.751 { 00:08:51.751 "params": { 00:08:51.751 "name": "Nvme$subsystem", 00:08:51.751 "trtype": "$TEST_TRANSPORT", 00:08:51.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.751 "adrfam": "ipv4", 00:08:51.751 "trsvcid": "$NVMF_PORT", 00:08:51.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.751 "hdgst": ${hdgst:-false}, 00:08:51.751 "ddgst": ${ddgst:-false} 00:08:51.751 }, 00:08:51.751 "method": "bdev_nvme_attach_controller" 00:08:51.751 } 00:08:51.751 EOF 00:08:51.751 )") 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=65662 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=65666 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:51.751 { 00:08:51.751 "params": { 00:08:51.751 "name": "Nvme$subsystem", 00:08:51.751 "trtype": "$TEST_TRANSPORT", 00:08:51.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.751 "adrfam": "ipv4", 00:08:51.751 "trsvcid": "$NVMF_PORT", 00:08:51.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.751 "hdgst": ${hdgst:-false}, 00:08:51.751 "ddgst": ${ddgst:-false} 00:08:51.751 }, 00:08:51.751 "method": "bdev_nvme_attach_controller" 00:08:51.751 } 00:08:51.751 EOF 00:08:51.751 )") 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:51.751 { 00:08:51.751 "params": { 00:08:51.751 "name": "Nvme$subsystem", 00:08:51.751 "trtype": "$TEST_TRANSPORT", 00:08:51.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.751 "adrfam": "ipv4", 00:08:51.751 "trsvcid": "$NVMF_PORT", 00:08:51.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.751 "hdgst": ${hdgst:-false}, 00:08:51.751 "ddgst": ${ddgst:-false} 00:08:51.751 }, 00:08:51.751 "method": "bdev_nvme_attach_controller" 00:08:51.751 } 00:08:51.751 EOF 00:08:51.751 )") 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:51.751 "params": { 00:08:51.751 "name": "Nvme1", 00:08:51.751 "trtype": "tcp", 00:08:51.751 "traddr": "10.0.0.2", 00:08:51.751 "adrfam": "ipv4", 00:08:51.751 "trsvcid": "4420", 00:08:51.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.751 "hdgst": false, 00:08:51.751 "ddgst": false 00:08:51.751 }, 00:08:51.751 "method": "bdev_nvme_attach_controller" 00:08:51.751 }' 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:51.751 "params": { 00:08:51.751 "name": "Nvme1", 00:08:51.751 "trtype": "tcp", 00:08:51.751 "traddr": "10.0.0.2", 00:08:51.751 "adrfam": "ipv4", 00:08:51.751 "trsvcid": "4420", 00:08:51.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.751 "hdgst": false, 00:08:51.751 "ddgst": false 00:08:51.751 }, 00:08:51.751 "method": "bdev_nvme_attach_controller" 00:08:51.751 }' 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:51.751 "params": { 00:08:51.751 "name": "Nvme1", 00:08:51.751 "trtype": "tcp", 00:08:51.751 "traddr": "10.0.0.2", 00:08:51.751 "adrfam": "ipv4", 00:08:51.751 "trsvcid": "4420", 00:08:51.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.751 "hdgst": false, 00:08:51.751 "ddgst": false 00:08:51.751 }, 00:08:51.751 "method": "bdev_nvme_attach_controller" 00:08:51.751 }' 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:51.751 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:51.751 "params": { 00:08:51.751 "name": "Nvme1", 00:08:51.751 "trtype": "tcp", 00:08:51.751 "traddr": "10.0.0.2", 00:08:51.751 "adrfam": "ipv4", 00:08:51.751 "trsvcid": "4420", 00:08:51.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.752 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.752 "hdgst": false, 00:08:51.752 "ddgst": false 00:08:51.752 }, 00:08:51.752 "method": "bdev_nvme_attach_controller" 00:08:51.752 }' 00:08:51.752 13:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 65657 00:08:51.752 [2024-07-25 13:51:40.671842] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:08:51.752 [2024-07-25 13:51:40.672217] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:51.752 [2024-07-25 13:51:40.672912] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
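The --json /dev/fd/63 arguments above are bash process substitutions, here of gen_nvmf_target_json's output, so each bdevperf instance attaches to the target from its own copy of the same configuration. Reflowed for readability, the bdev_nvme_attach_controller entry printed four times above is:

{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}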
00:08:51.752 [2024-07-25 13:51:40.673162] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:51.752 [2024-07-25 13:51:40.705489] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:08:51.752 [2024-07-25 13:51:40.705943] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:51.752 [2024-07-25 13:51:40.714957] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:08:51.752 [2024-07-25 13:51:40.715084] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:52.010 [2024-07-25 13:51:40.887248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.010 [2024-07-25 13:51:40.971103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.010 [2024-07-25 13:51:40.979851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:52.268 [2024-07-25 13:51:41.045894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.268 [2024-07-25 13:51:41.058348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:52.268 [2024-07-25 13:51:41.067474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:52.268 [2024-07-25 13:51:41.115611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:52.268 [2024-07-25 13:51:41.136609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.268 Running I/O for 1 seconds... 00:08:52.268 [2024-07-25 13:51:41.174682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:52.268 Running I/O for 1 seconds... 00:08:52.268 [2024-07-25 13:51:41.227561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:52.268 [2024-07-25 13:51:41.229352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:52.268 [2024-07-25 13:51:41.276734] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:52.526 Running I/O for 1 seconds... 00:08:52.526 Running I/O for 1 seconds... 
00:08:53.459 00:08:53.459 Latency(us) 00:08:53.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.459 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:53.459 Nvme1n1 : 1.02 6835.50 26.70 0.00 0.00 18524.30 8519.68 42419.67 00:08:53.459 =================================================================================================================== 00:08:53.459 Total : 6835.50 26.70 0.00 0.00 18524.30 8519.68 42419.67 00:08:53.459 00:08:53.459 Latency(us) 00:08:53.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.459 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:53.459 Nvme1n1 : 1.01 8476.38 33.11 0.00 0.00 15021.55 7596.22 27882.59 00:08:53.459 =================================================================================================================== 00:08:53.459 Total : 8476.38 33.11 0.00 0.00 15021.55 7596.22 27882.59 00:08:53.459 00:08:53.459 Latency(us) 00:08:53.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.459 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:53.459 Nvme1n1 : 1.00 168443.44 657.98 0.00 0.00 757.02 350.02 1489.45 00:08:53.459 =================================================================================================================== 00:08:53.459 Total : 168443.44 657.98 0.00 0.00 757.02 350.02 1489.45 00:08:53.459 00:08:53.459 Latency(us) 00:08:53.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.459 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:53.459 Nvme1n1 : 1.01 6429.70 25.12 0.00 0.00 19823.34 7745.16 43134.60 00:08:53.459 =================================================================================================================== 00:08:53.459 Total : 6429.70 25.12 0.00 0.00 19823.34 7745.16 43134.60 00:08:53.459 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 65659 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 65662 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 65666 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
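Those four latency tables come from four bdevperf instances run in parallel against the same Nvme1n1 namespace, each pinned to its own core and exercising one workload: write on mask 0x10 (~8.5k IOPS here), read on 0x20 (~6.4k), unmap on 0x80 (~6.8k), and flush on 0x40, which carries no data payload and therefore completes far faster (~168k IOPS). A loop-form condensation of the four invocations in the trace (the trace launches them individually and waits on the recorded pids):

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
for spec in "0x10 1 write" "0x20 2 read" "0x40 3 flush" "0x80 4 unmap"; do
    set -- $spec                                   # core mask, shared-memory instance id (-i), workload
    "$BDEVPERF" -m "$1" -i "$2" --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w "$3" -t 1 -s 256 &       # queue depth 128, 4 KiB I/O, 1-second run
done
wait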
00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:53.718 rmmod nvme_tcp 00:08:53.718 rmmod nvme_fabrics 00:08:53.718 rmmod nvme_keyring 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 65622 ']' 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 65622 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 65622 ']' 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 65622 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:53.718 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:53.976 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65622 00:08:53.976 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:53.976 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:53.976 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65622' 00:08:53.976 killing process with pid 65622 00:08:53.976 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 65622 00:08:53.976 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 65622 00:08:53.977 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:53.977 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:53.977 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:53.977 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.977 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:53.977 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.977 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.977 13:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.235 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:54.235 00:08:54.235 real 0m4.092s 00:08:54.235 user 0m18.096s 00:08:54.235 sys 0m2.187s 00:08:54.235 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.235 ************************************ 00:08:54.235 END TEST nvmf_bdev_io_wait 00:08:54.235 ************************************ 00:08:54.235 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:54.236 ************************************ 00:08:54.236 START TEST nvmf_queue_depth 00:08:54.236 ************************************ 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:54.236 * Looking for test storage... 00:08:54.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:54.236 Cannot find device "nvmf_tgt_br" 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:54.236 Cannot find device "nvmf_tgt_br2" 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:54.236 Cannot find device "nvmf_tgt_br" 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:54.236 Cannot find device "nvmf_tgt_br2" 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:08:54.236 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:54.496 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:54.496 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:54.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:08:54.496 00:08:54.496 --- 10.0.0.2 ping statistics --- 00:08:54.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.496 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:54.496 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:54.496 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:08:54.496 00:08:54.496 --- 10.0.0.3 ping statistics --- 00:08:54.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.496 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:54.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:54.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:08:54.496 00:08:54.496 --- 10.0.0.1 ping statistics --- 00:08:54.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.496 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:54.496 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:54.755 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:54.755 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:54.755 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:54.755 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.755 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=65907 00:08:54.755 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:54.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.755 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 65907 00:08:54.755 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 65907 ']' 00:08:54.755 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.755 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.755 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.755 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.755 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.755 [2024-07-25 13:51:43.581900] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
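For readers reconstructing the nvmf_veth_init sequence traced above: the topology it builds is one initiator-side veth in the root namespace bridged to two target-side veths inside the nvmf_tgt_ns_spdk namespace, so a single host can exercise NVMe/TCP against 10.0.0.2 and 10.0.0.3. Condensed into a plain shell sketch (interface names, the 10.0.0.0/24 addresses and port 4420 are the values this harness uses; run as root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator <-> bridge
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target path 1 <-> bridge
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target path 2 <-> bridge
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target, first path
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # target, second path

  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br      # tie all three veth peers together
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # forward across the bridge

The three pings above are the reachability check for exactly this layout, and the NVMF_APP assignment before them is why the nvmf_tgt invocation above is wrapped in 'ip netns exec nvmf_tgt_ns_spdk' (the -m 0x2 mask gives it the single core this test needs).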
00:08:54.755 [2024-07-25 13:51:43.582251] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.755 [2024-07-25 13:51:43.722507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.012 [2024-07-25 13:51:43.855654] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.012 [2024-07-25 13:51:43.855991] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.012 [2024-07-25 13:51:43.856192] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.012 [2024-07-25 13:51:43.856369] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.012 [2024-07-25 13:51:43.856600] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.012 [2024-07-25 13:51:43.856685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.012 [2024-07-25 13:51:43.946449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:55.626 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.626 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:55.626 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:55.626 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:55.626 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.626 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.626 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:55.626 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.626 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.626 [2024-07-25 13:51:44.613003] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.626 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.626 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:55.626 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.626 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.884 Malloc0 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.884 [2024-07-25 13:51:44.679830] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=65939 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 65939 /var/tmp/bdevperf.sock 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 65939 ']' 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:55.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:55.884 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.884 [2024-07-25 13:51:44.750243] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
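From here on the queue-depth case is driven entirely over JSON-RPC. A condensed sketch of the same flow, assuming an SPDK checkout as the working directory (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, and the absolute /home/vagrant/spdk_repo paths are shortened accordingly):

  # target side: the nvmf_tgt started above listens for RPC on /var/tmp/spdk.sock
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: bdevperf waits (-z) on its own RPC socket, then runs a
  # 1024-deep, 4 KiB verify workload for 10 seconds against the attached controller
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The Latency(us) table further down (about 7.6k IOPS and roughly 134 ms average latency at queue depth 1024 in this run) is the output of that perform_tests call.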
00:08:55.884 [2024-07-25 13:51:44.750819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65939 ] 00:08:55.884 [2024-07-25 13:51:44.898239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.143 [2024-07-25 13:51:45.035724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.143 [2024-07-25 13:51:45.094468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:57.094 13:51:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.094 13:51:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:57.094 13:51:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:57.094 13:51:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.094 13:51:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.094 NVMe0n1 00:08:57.094 13:51:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.094 13:51:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:57.094 Running I/O for 10 seconds... 00:09:09.288 00:09:09.288 Latency(us) 00:09:09.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.288 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:09.288 Verification LBA range: start 0x0 length 0x4000 00:09:09.288 NVMe0n1 : 10.10 7599.56 29.69 0.00 0.00 134040.41 28240.06 100567.97 00:09:09.288 =================================================================================================================== 00:09:09.288 Total : 7599.56 29.69 0.00 0.00 134040.41 28240.06 100567.97 00:09:09.288 0 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 65939 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 65939 ']' 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 65939 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65939 00:09:09.288 killing process with pid 65939 00:09:09.288 Received shutdown signal, test time was about 10.000000 seconds 00:09:09.288 00:09:09.288 Latency(us) 00:09:09.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.288 =================================================================================================================== 00:09:09.288 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65939' 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 65939 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 65939 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:09.288 rmmod nvme_tcp 00:09:09.288 rmmod nvme_fabrics 00:09:09.288 rmmod nvme_keyring 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 65907 ']' 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 65907 00:09:09.288 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 65907 ']' 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 65907 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65907 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:09.289 killing process with pid 65907 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65907' 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 65907 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 65907 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:09.289 13:51:56 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:09.289 00:09:09.289 real 0m13.772s 00:09:09.289 user 0m23.685s 00:09:09.289 sys 0m2.474s 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.289 ************************************ 00:09:09.289 END TEST nvmf_queue_depth 00:09:09.289 ************************************ 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.289 ************************************ 00:09:09.289 START TEST nvmf_target_multipath 00:09:09.289 ************************************ 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:09.289 * Looking for test storage... 
00:09:09.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.289 13:51:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.289 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:09.289 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:09.289 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:09.289 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:09.289 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:09.289 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:09.289 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.289 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:09.290 13:51:57 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:09.290 Cannot find device "nvmf_tgt_br" 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.290 Cannot find device "nvmf_tgt_br2" 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:09.290 Cannot find device "nvmf_tgt_br" 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:09.290 Cannot find device "nvmf_tgt_br2" 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:09.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:09.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:09:09.290 00:09:09.290 --- 10.0.0.2 ping statistics --- 00:09:09.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.290 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:09.290 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:09.290 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:09:09.290 00:09:09.290 --- 10.0.0.3 ping statistics --- 00:09:09.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.290 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:09.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:09.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:09:09.290 00:09:09.290 --- 10.0.0.1 ping statistics --- 00:09:09.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.290 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=66268 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 66268 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 66268 ']' 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:09.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
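The multipath test starting here reuses the same malloc-backed namespace but exposes it over both target addresses and lets the kernel initiator connect once per path; that is what the -m 0xF mask (four reactor cores, matching the four 'Reactor started' lines below) and the second listener are for. A condensed sketch of the configuration this test performs next, with $NVME_HOSTNQN and $NVME_HOSTID standing for the generated hostnqn/hostid values shown earlier in the trace, and with flag comments reflecting the usual rpc.py/nvme-cli meanings rather than anything printed here:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDKISFASTANDAWESOME -r                  # -r: enable ANA reporting
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # one kernel connect per path; both controllers land in the same NVMe subsystem
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G    # -g/-G: header/data digest
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G

With both connects in place the subsystem shows up once with two controllers, which is why the test later resolves the paths to nvme0c0n1 and nvme0c1n1 under /sys/class/nvme-subsystem and checks each path's /sys/block/<path>/ana_state for the expected 'optimized' state.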
00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:09.290 13:51:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:09.290 [2024-07-25 13:51:57.428602] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:09:09.290 [2024-07-25 13:51:57.429178] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.290 [2024-07-25 13:51:57.564914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.290 [2024-07-25 13:51:57.686118] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.290 [2024-07-25 13:51:57.686427] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.290 [2024-07-25 13:51:57.686468] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.290 [2024-07-25 13:51:57.686478] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.290 [2024-07-25 13:51:57.686486] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.290 [2024-07-25 13:51:57.686629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.291 [2024-07-25 13:51:57.686986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.291 [2024-07-25 13:51:57.687135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.291 [2024-07-25 13:51:57.687158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.291 [2024-07-25 13:51:57.739561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:09.577 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.577 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:09:09.577 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:09.577 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:09.577 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:09.577 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.577 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:09.836 [2024-07-25 13:51:58.821227] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.093 13:51:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:10.351 Malloc0 00:09:10.351 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:10.609 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:11.176 13:51:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.433 [2024-07-25 13:52:00.297190] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.433 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:11.690 [2024-07-25 13:52:00.549423] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:11.690 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid=71427938-e211-49fa-b6ad-486cdab0bd89 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:11.690 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid=71427938-e211-49fa-b6ad-486cdab0bd89 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:11.948 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:11.948 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:09:11.948 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:11.948 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:11.948 13:52:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=66365 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:13.846 13:52:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:13.846 [global] 00:09:13.846 thread=1 00:09:13.846 invalidate=1 00:09:13.846 rw=randrw 00:09:13.846 time_based=1 00:09:13.846 runtime=6 00:09:13.846 ioengine=libaio 00:09:13.846 direct=1 00:09:13.846 bs=4096 00:09:13.846 iodepth=128 00:09:13.846 norandommap=0 00:09:13.846 numjobs=1 00:09:13.846 00:09:13.846 verify_dump=1 00:09:13.846 verify_backlog=512 00:09:13.846 verify_state_save=0 00:09:13.846 do_verify=1 00:09:13.846 verify=crc32c-intel 00:09:13.846 [job0] 00:09:13.846 filename=/dev/nvme0n1 00:09:14.103 Could not set queue depth (nvme0n1) 00:09:14.103 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:14.103 fio-3.35 00:09:14.103 Starting 1 thread 00:09:15.035 13:52:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:15.292 13:52:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:15.603 13:52:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:15.603 13:52:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:15.603 13:52:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:15.603 13:52:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:15.603 13:52:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:15.604 13:52:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:15.604 13:52:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:15.604 13:52:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:15.604 13:52:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:15.604 13:52:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:15.604 13:52:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:15.604 13:52:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:15.604 13:52:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:15.863 13:52:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:16.120 13:52:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:16.120 13:52:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:16.120 13:52:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:16.120 13:52:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:16.120 13:52:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:16.120 13:52:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:16.120 13:52:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:16.120 13:52:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:16.120 13:52:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:16.120 13:52:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:16.120 13:52:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:16.120 13:52:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:16.120 13:52:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 66365 00:09:20.303 00:09:20.303 job0: (groupid=0, jobs=1): err= 0: pid=66387: Thu Jul 25 13:52:09 2024 00:09:20.303 read: IOPS=10.4k, BW=40.5MiB/s (42.4MB/s)(243MiB/6006msec) 00:09:20.303 slat (usec): min=3, max=5631, avg=55.79, stdev=222.01 00:09:20.303 clat (usec): min=1625, max=15706, avg=8470.33, stdev=1617.69 00:09:20.303 lat (usec): min=1645, max=16121, avg=8526.12, stdev=1623.08 00:09:20.303 clat percentiles (usec): 00:09:20.303 | 1.00th=[ 4359], 5.00th=[ 6128], 10.00th=[ 7046], 20.00th=[ 7635], 00:09:20.303 | 30.00th=[ 7898], 40.00th=[ 8094], 50.00th=[ 8291], 60.00th=[ 8455], 00:09:20.303 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[10290], 95.00th=[12256], 00:09:20.303 | 99.00th=[13435], 99.50th=[14091], 99.90th=[15008], 99.95th=[15270], 00:09:20.303 | 99.99th=[15533] 00:09:20.303 bw ( KiB/s): min=13768, max=25736, per=51.80%, avg=21459.75, stdev=4159.07, samples=12 00:09:20.303 iops : min= 3442, max= 6434, avg=5364.92, stdev=1039.77, samples=12 00:09:20.303 write: IOPS=5840, BW=22.8MiB/s (23.9MB/s)(126MiB/5524msec); 0 zone resets 00:09:20.303 slat (usec): min=9, max=2307, avg=66.35, stdev=144.46 00:09:20.303 clat (usec): min=731, max=15757, avg=7278.26, stdev=1428.64 00:09:20.303 lat (usec): min=880, max=15777, avg=7344.60, stdev=1434.19 00:09:20.303 clat percentiles (usec): 00:09:20.303 | 1.00th=[ 3392], 5.00th=[ 4293], 10.00th=[ 5145], 20.00th=[ 6652], 00:09:20.303 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7439], 60.00th=[ 7635], 00:09:20.303 | 70.00th=[ 7832], 80.00th=[ 8094], 90.00th=[ 8455], 95.00th=[ 9110], 00:09:20.303 | 99.00th=[11600], 99.50th=[12125], 99.90th=[13173], 99.95th=[13566], 00:09:20.303 | 99.99th=[13960] 00:09:20.303 bw ( KiB/s): min=14032, max=25376, per=91.90%, avg=21470.25, stdev=3755.86, samples=12 00:09:20.303 iops : min= 3508, max= 6344, avg=5367.50, stdev=938.95, samples=12 00:09:20.303 lat (usec) : 750=0.01%, 1000=0.01% 00:09:20.303 lat (msec) : 2=0.04%, 4=1.40%, 10=90.13%, 20=8.43% 00:09:20.303 cpu : usr=6.13%, sys=25.42%, ctx=5726, majf=0, minf=84 00:09:20.303 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:20.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:20.303 issued rwts: total=62202,32265,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.303 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:20.303 00:09:20.303 Run status group 0 (all jobs): 00:09:20.303 READ: bw=40.5MiB/s (42.4MB/s), 40.5MiB/s-40.5MiB/s (42.4MB/s-42.4MB/s), io=243MiB (255MB), run=6006-6006msec 00:09:20.303 WRITE: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=126MiB (132MB), run=5524-5524msec 00:09:20.303 00:09:20.303 Disk stats (read/write): 00:09:20.303 nvme0n1: ios=61291/31639, merge=0/0, ticks=493075/212639, in_queue=705714, util=98.63% 00:09:20.303 13:52:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:20.561 13:52:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:20.818 13:52:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:20.818 13:52:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:20.818 13:52:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:20.818 13:52:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:20.818 13:52:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:20.818 13:52:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:20.818 13:52:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:20.818 13:52:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:20.818 13:52:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:20.818 13:52:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:20.818 13:52:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:20.818 13:52:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:20.818 13:52:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:20.818 13:52:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:20.819 13:52:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=66470 00:09:20.819 13:52:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:20.819 [global] 00:09:20.819 thread=1 00:09:20.819 invalidate=1 00:09:20.819 rw=randrw 00:09:20.819 time_based=1 00:09:20.819 runtime=6 00:09:20.819 ioengine=libaio 00:09:20.819 direct=1 00:09:20.819 bs=4096 00:09:20.819 iodepth=128 00:09:20.819 norandommap=0 00:09:20.819 numjobs=1 00:09:20.819 00:09:20.819 verify_dump=1 00:09:20.819 verify_backlog=512 00:09:20.819 verify_state_save=0 00:09:20.819 do_verify=1 00:09:20.819 verify=crc32c-intel 00:09:20.819 [job0] 00:09:20.819 filename=/dev/nvme0n1 00:09:20.819 Could not set queue depth (nvme0n1) 00:09:21.076 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:21.076 fio-3.35 00:09:21.076 Starting 1 thread 00:09:22.008 13:52:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:22.265 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.3 -s 4420 -n non_optimized 00:09:22.584 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:22.584 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:22.584 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:22.584 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:22.584 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:22.584 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:22.584 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:22.584 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:22.584 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:22.584 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:22.584 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:22.584 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:22.584 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:22.584 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:22.857 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:22.857 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:22.857 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:22.857 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:22.857 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:22.857 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:22.857 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:22.857 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:22.857 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:22.857 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:22.857 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:22.857 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:22.857 13:52:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 66470 00:09:27.048 00:09:27.048 job0: (groupid=0, jobs=1): err= 0: pid=66491: Thu Jul 25 13:52:16 2024 00:09:27.048 read: IOPS=11.2k, BW=43.7MiB/s (45.9MB/s)(263MiB/6007msec) 00:09:27.048 slat (usec): min=2, max=6884, avg=45.06, stdev=200.50 00:09:27.048 clat (usec): min=300, max=24578, avg=7882.21, stdev=2383.39 00:09:27.048 lat (usec): min=309, max=24593, avg=7927.28, stdev=2392.65 00:09:27.048 clat percentiles (usec): 00:09:27.048 | 1.00th=[ 1778], 5.00th=[ 3687], 10.00th=[ 4817], 20.00th=[ 6587], 00:09:27.048 | 30.00th=[ 7373], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8225], 00:09:27.048 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[10421], 95.00th=[12387], 00:09:27.048 | 99.00th=[15401], 99.50th=[16712], 99.90th=[19006], 99.95th=[20055], 00:09:27.048 | 99.99th=[22676] 00:09:27.048 bw ( KiB/s): min=11272, max=39456, per=52.03%, avg=23301.33, stdev=6951.25, samples=12 00:09:27.048 iops : min= 2818, max= 9864, avg=5825.33, stdev=1737.81, samples=12 00:09:27.048 write: IOPS=6442, BW=25.2MiB/s (26.4MB/s)(137MiB/5428msec); 0 zone resets 00:09:27.048 slat (usec): min=3, max=5225, avg=55.21, stdev=128.76 00:09:27.048 clat (usec): min=278, max=22518, avg=6637.56, stdev=2165.89 00:09:27.048 lat (usec): min=306, max=22553, avg=6692.77, stdev=2174.02 00:09:27.048 clat percentiles (usec): 00:09:27.048 | 1.00th=[ 1385], 5.00th=[ 3064], 10.00th=[ 3818], 20.00th=[ 4686], 00:09:27.048 | 30.00th=[ 5866], 40.00th=[ 6783], 50.00th=[ 7111], 60.00th=[ 7373], 00:09:27.048 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8291], 95.00th=[ 8979], 00:09:27.048 | 99.00th=[14484], 99.50th=[15270], 99.90th=[17433], 99.95th=[18220], 00:09:27.048 | 99.99th=[20841] 00:09:27.048 bw ( KiB/s): min=11440, max=38600, per=90.31%, avg=23272.00, stdev=6655.77, samples=12 00:09:27.048 iops : min= 2860, max= 9650, avg=5818.00, stdev=1663.94, samples=12 00:09:27.048 lat (usec) : 500=0.04%, 750=0.13%, 1000=0.16% 00:09:27.048 lat (msec) : 2=1.41%, 4=6.33%, 10=83.62%, 20=8.28%, 50=0.04% 00:09:27.048 cpu : usr=6.26%, sys=24.81%, ctx=6185, majf=0, minf=96 00:09:27.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:27.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.048 issued rwts: total=67257,34970,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.048 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:09:27.048 00:09:27.048 Run status group 0 (all jobs): 00:09:27.048 READ: bw=43.7MiB/s (45.9MB/s), 43.7MiB/s-43.7MiB/s (45.9MB/s-45.9MB/s), io=263MiB (275MB), run=6007-6007msec 00:09:27.048 WRITE: bw=25.2MiB/s (26.4MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=137MiB (143MB), run=5428-5428msec 00:09:27.048 00:09:27.048 Disk stats (read/write): 00:09:27.048 nvme0n1: ios=66367/34414, merge=0/0, ticks=497630/211577, in_queue=709207, util=98.68% 00:09:27.048 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:27.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:27.306 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:27.306 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:09:27.306 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:27.306 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:27.306 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:27.306 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:27.306 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:27.306 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:27.564 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:27.564 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:27.564 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:27.564 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:27.564 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:27.564 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:27.564 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:27.564 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:27.564 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:27.564 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:27.564 rmmod nvme_tcp 00:09:27.822 rmmod nvme_fabrics 00:09:27.822 rmmod nvme_keyring 00:09:27.822 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:27.822 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:27.822 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:27.822 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # 
'[' -n 66268 ']' 00:09:27.822 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 66268 00:09:27.822 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 66268 ']' 00:09:27.822 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 66268 00:09:27.822 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:09:27.822 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:27.822 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66268 00:09:27.822 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:27.822 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:27.822 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66268' 00:09:27.822 killing process with pid 66268 00:09:27.822 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 66268 00:09:27.822 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 66268 00:09:28.081 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:28.081 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:28.081 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:28.081 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:28.081 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:28.081 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.081 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.081 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.081 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:28.081 ************************************ 00:09:28.081 END TEST nvmf_target_multipath 00:09:28.081 ************************************ 00:09:28.081 00:09:28.081 real 0m20.056s 00:09:28.081 user 1m15.943s 00:09:28.081 sys 0m10.226s 00:09:28.081 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.081 13:52:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:28.081 13:52:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:28.081 13:52:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:28.081 13:52:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.081 13:52:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:28.081 
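[Editor's note] Before the zcopy test begins below, the multipath run that just ended can be condensed to: one malloc namespace exported through two TCP listeners, two host connections, and ANA state flips driven over RPC while fio keeps I/O running. The sketch below uses only rpc.py and nvme-cli invocations that appear verbatim in the trace; the $rpc and $nqn shell variables are added here for brevity, and the host NQN/UUID is the value generated for this particular run, so treat the concrete values as examples.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Target side: transport, backing bdev, subsystem, and two listeners.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem $nqn -a -s SPDKISFASTANDAWESOME -r
$rpc nvmf_subsystem_add_ns $nqn Malloc0
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4420

# Host side: connect both paths with the run-specific host NQN/UUID.
nvme connect -t tcp -n $nqn -a 10.0.0.2 -s 4420 -g -G \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 \
    --hostid=71427938-e211-49fa-b6ad-486cdab0bd89
nvme connect -t tcp -n $nqn -a 10.0.0.3 -s 4420 -g -G \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 \
    --hostid=71427938-e211-49fa-b6ad-486cdab0bd89

# Flip ANA states while fio runs, then watch the host-side view follow.
$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
cat /sys/block/nvme0c0n1/ana_state /sys/block/nvme0c1n1/ana_state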
************************************ 00:09:28.081 START TEST nvmf_zcopy 00:09:28.081 ************************************ 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:28.081 * Looking for test storage... 00:09:28.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.081 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:28.082 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:28.340 Cannot find device "nvmf_tgt_br" 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 
-- # true 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:28.340 Cannot find device "nvmf_tgt_br2" 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:28.340 Cannot find device "nvmf_tgt_br" 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:28.340 Cannot find device "nvmf_tgt_br2" 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:28.340 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:28.340 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:28.340 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:28.341 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:28.341 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:28.341 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:28.341 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:28.341 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:28.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:28.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:09:28.599 00:09:28.599 --- 10.0.0.2 ping statistics --- 00:09:28.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.599 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:28.599 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:28.599 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:09:28.599 00:09:28.599 --- 10.0.0.3 ping statistics --- 00:09:28.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.599 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:28.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:28.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:09:28.599 00:09:28.599 --- 10.0.0.1 ping statistics --- 00:09:28.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.599 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=66760 00:09:28.599 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 66760 00:09:28.600 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:28.600 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 66760 ']' 00:09:28.600 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.600 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:28.600 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.600 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:28.600 13:52:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.600 [2024-07-25 13:52:17.519546] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:09:28.600 [2024-07-25 13:52:17.519951] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.858 [2024-07-25 13:52:17.658110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.858 [2024-07-25 13:52:17.777549] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.858 [2024-07-25 13:52:17.777611] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.858 [2024-07-25 13:52:17.777624] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.858 [2024-07-25 13:52:17.777632] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.858 [2024-07-25 13:52:17.777640] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:28.858 [2024-07-25 13:52:17.777670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.858 [2024-07-25 13:52:17.830502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.792 [2024-07-25 13:52:18.590981] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:29.792 [2024-07-25 13:52:18.611108] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.792 malloc0 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:29.792 { 00:09:29.792 "params": { 00:09:29.792 "name": "Nvme$subsystem", 00:09:29.792 "trtype": "$TEST_TRANSPORT", 00:09:29.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:29.792 "adrfam": "ipv4", 00:09:29.792 "trsvcid": "$NVMF_PORT", 00:09:29.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:29.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:29.792 "hdgst": ${hdgst:-false}, 00:09:29.792 "ddgst": ${ddgst:-false} 00:09:29.792 }, 00:09:29.792 "method": "bdev_nvme_attach_controller" 00:09:29.792 } 00:09:29.792 EOF 00:09:29.792 )") 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:09:29.792 13:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:09:29.792 "params": {
00:09:29.792 "name": "Nvme1",
00:09:29.792 "trtype": "tcp",
00:09:29.792 "traddr": "10.0.0.2",
00:09:29.792 "adrfam": "ipv4",
00:09:29.792 "trsvcid": "4420",
00:09:29.792 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:29.792 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:29.792 "hdgst": false,
00:09:29.792 "ddgst": false
00:09:29.792 },
00:09:29.792 "method": "bdev_nvme_attach_controller"
00:09:29.792 }'
00:09:29.792 [2024-07-25 13:52:18.704272] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization...
00:09:29.792 [2024-07-25 13:52:18.704378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66793 ]
00:09:30.050 [2024-07-25 13:52:18.841614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:30.050 [2024-07-25 13:52:18.996650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:30.050 [2024-07-25 13:52:19.065431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:09:30.334 Running I/O for 10 seconds...
00:09:40.299
00:09:40.299 Latency(us)
00:09:40.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:40.299 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:40.299 Verification LBA range: start 0x0 length 0x1000
00:09:40.299 Nvme1n1 : 10.02 5681.93 44.39 0.00 0.00 22456.28 3232.12 32410.53
00:09:40.299 ===================================================================================================================
00:09:40.299 Total : 5681.93 44.39 0.00 0.00 22456.28 3232.12 32410.53
00:09:40.558 13:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=66909
00:09:40.558 13:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:40.558 13:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:40.558 13:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:40.558 13:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:40.558 13:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:09:40.558 13:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:09:40.558 13:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:09:40.558 13:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:09:40.558 {
00:09:40.558 "params": {
00:09:40.558 "name": "Nvme$subsystem",
00:09:40.558 "trtype": "$TEST_TRANSPORT",
00:09:40.558 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:40.558 "adrfam": "ipv4",
00:09:40.558 "trsvcid": "$NVMF_PORT",
00:09:40.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:40.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:40.558 "hdgst": ${hdgst:-false},
00:09:40.558 "ddgst": ${ddgst:-false}
00:09:40.558 },
00:09:40.558 "method": "bdev_nvme_attach_controller"
00:09:40.558 }
00:09:40.558
EOF 00:09:40.558 )") 00:09:40.558 13:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:40.558 13:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:40.558 [2024-07-25 13:52:29.449795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.558 [2024-07-25 13:52:29.449839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.558 13:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:40.558 13:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:40.558 "params": { 00:09:40.558 "name": "Nvme1", 00:09:40.558 "trtype": "tcp", 00:09:40.558 "traddr": "10.0.0.2", 00:09:40.558 "adrfam": "ipv4", 00:09:40.558 "trsvcid": "4420", 00:09:40.558 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:40.558 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:40.558 "hdgst": false, 00:09:40.558 "ddgst": false 00:09:40.558 }, 00:09:40.558 "method": "bdev_nvme_attach_controller" 00:09:40.558 }' 00:09:40.558 [2024-07-25 13:52:29.461782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.558 [2024-07-25 13:52:29.461824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.558 [2024-07-25 13:52:29.469771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.558 [2024-07-25 13:52:29.469808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.558 [2024-07-25 13:52:29.481783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.558 [2024-07-25 13:52:29.481825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.558 [2024-07-25 13:52:29.489779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.558 [2024-07-25 13:52:29.489817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.558 [2024-07-25 13:52:29.492174] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:09:40.558 [2024-07-25 13:52:29.492244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66909 ] 00:09:40.558 [2024-07-25 13:52:29.501787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.558 [2024-07-25 13:52:29.501824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.558 [2024-07-25 13:52:29.509790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.558 [2024-07-25 13:52:29.509831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.558 [2024-07-25 13:52:29.517784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.558 [2024-07-25 13:52:29.517824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.558 [2024-07-25 13:52:29.525780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.558 [2024-07-25 13:52:29.525818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.558 [2024-07-25 13:52:29.537791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.558 [2024-07-25 13:52:29.537828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.558 [2024-07-25 13:52:29.549795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.558 [2024-07-25 13:52:29.549834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.558 [2024-07-25 13:52:29.561815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.558 [2024-07-25 13:52:29.561859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.558 [2024-07-25 13:52:29.573807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.558 [2024-07-25 13:52:29.573847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.558 [2024-07-25 13:52:29.585812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.558 [2024-07-25 13:52:29.585853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.816 [2024-07-25 13:52:29.597813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.816 [2024-07-25 13:52:29.597859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.816 [2024-07-25 13:52:29.609813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.816 [2024-07-25 13:52:29.609851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.816 [2024-07-25 13:52:29.621818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.816 [2024-07-25 13:52:29.621851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.816 [2024-07-25 13:52:29.629939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.816 [2024-07-25 13:52:29.633837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.816 [2024-07-25 13:52:29.633872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:40.816 [2024-07-25 13:52:29.645828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.816 [2024-07-25 13:52:29.645872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.816 [2024-07-25 13:52:29.657830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.816 [2024-07-25 13:52:29.657883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.816 [2024-07-25 13:52:29.669831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.816 [2024-07-25 13:52:29.669870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.816 [2024-07-25 13:52:29.681834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.816 [2024-07-25 13:52:29.681876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.816 [2024-07-25 13:52:29.693841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.817 [2024-07-25 13:52:29.693884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.817 [2024-07-25 13:52:29.705839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.817 [2024-07-25 13:52:29.705882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.817 [2024-07-25 13:52:29.717839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.817 [2024-07-25 13:52:29.717881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.817 [2024-07-25 13:52:29.729842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.817 [2024-07-25 13:52:29.729884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.817 [2024-07-25 13:52:29.741851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.817 [2024-07-25 13:52:29.741896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.817 [2024-07-25 13:52:29.748694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.817 [2024-07-25 13:52:29.753841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.817 [2024-07-25 13:52:29.753873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.817 [2024-07-25 13:52:29.765855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.817 [2024-07-25 13:52:29.765906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.817 [2024-07-25 13:52:29.773856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.817 [2024-07-25 13:52:29.773897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.817 [2024-07-25 13:52:29.785864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.817 [2024-07-25 13:52:29.785909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.817 [2024-07-25 13:52:29.797867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.817 [2024-07-25 13:52:29.797911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.817 
[2024-07-25 13:52:29.809870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.817 [2024-07-25 13:52:29.809914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.817 [2024-07-25 13:52:29.810375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:40.817 [2024-07-25 13:52:29.821877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.817 [2024-07-25 13:52:29.821919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.817 [2024-07-25 13:52:29.833885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.817 [2024-07-25 13:52:29.833933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.817 [2024-07-25 13:52:29.845874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.817 [2024-07-25 13:52:29.845916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.074 [2024-07-25 13:52:29.857887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.074 [2024-07-25 13:52:29.857936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.074 [2024-07-25 13:52:29.869914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.074 [2024-07-25 13:52:29.869962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.074 [2024-07-25 13:52:29.881978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.074 [2024-07-25 13:52:29.882032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.074 [2024-07-25 13:52:29.893995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.074 [2024-07-25 13:52:29.894069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.074 [2024-07-25 13:52:29.905961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.074 [2024-07-25 13:52:29.906012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.074 [2024-07-25 13:52:29.917956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.074 [2024-07-25 13:52:29.918011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.074 Running I/O for 5 seconds... 
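Everything from "Running I/O for 5 seconds..." onward happens while the second bdevperf pass (perfpid 66909) runs: zcopy.sh@37 launched it with --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192, i.e. a 5-second 50/50 random read/write job at queue depth 128 with 8 KiB I/O, the config again coming from gen_nvmf_target_json via process substitution (hence the /dev/fd/63 path). Each error pair that follows, subsystem.c "Requested NSID 1 already in use" and nvmf_rpc.c "Unable to add namespace", is one rejected nvmf_subsystem_add_ns RPC; the loop issuing them is not visible in this excerpt, but it appears to re-add the already-attached namespace repeatedly so the subsystem pause path is exercised while zcopy I/O is in flight, and the failures are expected output rather than test errors. Below is a hypothetical sketch (the loop body is a guess; perfpid, gen_nvmf_target_json, and the NQN are taken from the trace) of a loop that would produce exactly this output.

# Hypothetical sketch, not the test's exact code: keep re-adding NSID 1 while
# bdevperf runs; the namespace already exists, so every RPC fails with the two
# errors repeated above.
build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!
while kill -0 "$perfpid" 2>/dev/null; do
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done
wait "$perfpid"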
00:09:41.074 [2024-07-25 13:52:29.929966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.074 [2024-07-25 13:52:29.930013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.074 [2024-07-25 13:52:29.948901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.074 [2024-07-25 13:52:29.948955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.074 [2024-07-25 13:52:29.963480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.074 [2024-07-25 13:52:29.963538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.075 [2024-07-25 13:52:29.980055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.075 [2024-07-25 13:52:29.980117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.075 [2024-07-25 13:52:29.996971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.075 [2024-07-25 13:52:29.997034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.075 [2024-07-25 13:52:30.011908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.075 [2024-07-25 13:52:30.011964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.075 [2024-07-25 13:52:30.028288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.075 [2024-07-25 13:52:30.028363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.075 [2024-07-25 13:52:30.044674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.075 [2024-07-25 13:52:30.044734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.075 [2024-07-25 13:52:30.061290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.075 [2024-07-25 13:52:30.061362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.075 [2024-07-25 13:52:30.077280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.075 [2024-07-25 13:52:30.077352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.075 [2024-07-25 13:52:30.086653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.075 [2024-07-25 13:52:30.086709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.075 [2024-07-25 13:52:30.103419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.075 [2024-07-25 13:52:30.103483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.333 [2024-07-25 13:52:30.119840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.333 [2024-07-25 13:52:30.119900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.333 [2024-07-25 13:52:30.136189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.333 [2024-07-25 13:52:30.136248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.333 [2024-07-25 13:52:30.152741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.333 
[2024-07-25 13:52:30.152799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.333 [2024-07-25 13:52:30.169026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.333 [2024-07-25 13:52:30.169083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.333 [2024-07-25 13:52:30.185475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.333 [2024-07-25 13:52:30.185538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.333 [2024-07-25 13:52:30.202264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.333 [2024-07-25 13:52:30.202346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.333 [2024-07-25 13:52:30.220267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.333 [2024-07-25 13:52:30.220344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.333 [2024-07-25 13:52:30.234277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.333 [2024-07-25 13:52:30.234348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.333 [2024-07-25 13:52:30.250655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.333 [2024-07-25 13:52:30.250723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.333 [2024-07-25 13:52:30.260927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.333 [2024-07-25 13:52:30.260984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.333 [2024-07-25 13:52:30.277234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.333 [2024-07-25 13:52:30.277316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.333 [2024-07-25 13:52:30.293632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.333 [2024-07-25 13:52:30.293680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.333 [2024-07-25 13:52:30.309935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.333 [2024-07-25 13:52:30.309988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.334 [2024-07-25 13:52:30.320411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.334 [2024-07-25 13:52:30.320458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.334 [2024-07-25 13:52:30.335020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.334 [2024-07-25 13:52:30.335077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.334 [2024-07-25 13:52:30.345532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.334 [2024-07-25 13:52:30.345580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.334 [2024-07-25 13:52:30.359971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.334 [2024-07-25 13:52:30.360032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.592 [2024-07-25 13:52:30.375635] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.592 [2024-07-25 13:52:30.375695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.592 [2024-07-25 13:52:30.385840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.592 [2024-07-25 13:52:30.385899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.592 [2024-07-25 13:52:30.401138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.592 [2024-07-25 13:52:30.401202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.592 [2024-07-25 13:52:30.418179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.592 [2024-07-25 13:52:30.418238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.592 [2024-07-25 13:52:30.434510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.592 [2024-07-25 13:52:30.434568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.592 [2024-07-25 13:52:30.452183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.593 [2024-07-25 13:52:30.452244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.593 [2024-07-25 13:52:30.467601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.593 [2024-07-25 13:52:30.467665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.593 [2024-07-25 13:52:30.485136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.593 [2024-07-25 13:52:30.485197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.593 [2024-07-25 13:52:30.500725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.593 [2024-07-25 13:52:30.500789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.593 [2024-07-25 13:52:30.510797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.593 [2024-07-25 13:52:30.510854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.593 [2024-07-25 13:52:30.527279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.593 [2024-07-25 13:52:30.527351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.593 [2024-07-25 13:52:30.543080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.593 [2024-07-25 13:52:30.543142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.593 [2024-07-25 13:52:30.558130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.593 [2024-07-25 13:52:30.558199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.593 [2024-07-25 13:52:30.568130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.593 [2024-07-25 13:52:30.568181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.593 [2024-07-25 13:52:30.583027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.593 [2024-07-25 13:52:30.583090] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.593 [2024-07-25 13:52:30.596749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.593 [2024-07-25 13:52:30.596806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.593 [2024-07-25 13:52:30.612553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.593 [2024-07-25 13:52:30.612624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.593 [2024-07-25 13:52:30.622722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.593 [2024-07-25 13:52:30.622778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.851 [2024-07-25 13:52:30.638791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.851 [2024-07-25 13:52:30.638834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.851 [2024-07-25 13:52:30.648855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.851 [2024-07-25 13:52:30.648899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.851 [2024-07-25 13:52:30.660480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.851 [2024-07-25 13:52:30.660534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.851 [2024-07-25 13:52:30.675800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.851 [2024-07-25 13:52:30.675855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.851 [2024-07-25 13:52:30.692181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.851 [2024-07-25 13:52:30.692235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.851 [2024-07-25 13:52:30.702048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.851 [2024-07-25 13:52:30.702101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.851 [2024-07-25 13:52:30.717538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.851 [2024-07-25 13:52:30.717597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.851 [2024-07-25 13:52:30.734786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.851 [2024-07-25 13:52:30.734845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.851 [2024-07-25 13:52:30.751906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.851 [2024-07-25 13:52:30.751960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.851 [2024-07-25 13:52:30.766418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.851 [2024-07-25 13:52:30.766476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.851 [2024-07-25 13:52:30.782386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.851 [2024-07-25 13:52:30.782443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.851 [2024-07-25 13:52:30.799321] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.851 [2024-07-25 13:52:30.799380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.851 [2024-07-25 13:52:30.815676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.851 [2024-07-25 13:52:30.815734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.851 [2024-07-25 13:52:30.832691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.851 [2024-07-25 13:52:30.832749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.851 [2024-07-25 13:52:30.848812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.851 [2024-07-25 13:52:30.848866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.851 [2024-07-25 13:52:30.866134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.851 [2024-07-25 13:52:30.866192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.851 [2024-07-25 13:52:30.881215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.851 [2024-07-25 13:52:30.881270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.109 [2024-07-25 13:52:30.897106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.109 [2024-07-25 13:52:30.897160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.109 [2024-07-25 13:52:30.914285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.109 [2024-07-25 13:52:30.914370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.109 [2024-07-25 13:52:30.929969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.109 [2024-07-25 13:52:30.930026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.109 [2024-07-25 13:52:30.939373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.109 [2024-07-25 13:52:30.939422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.109 [2024-07-25 13:52:30.955618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.109 [2024-07-25 13:52:30.955683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.109 [2024-07-25 13:52:30.965746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.109 [2024-07-25 13:52:30.965793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.109 [2024-07-25 13:52:30.976912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.109 [2024-07-25 13:52:30.976963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.109 [2024-07-25 13:52:30.993477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.109 [2024-07-25 13:52:30.993546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.110 [2024-07-25 13:52:31.003237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.110 [2024-07-25 13:52:31.003290] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.110 [2024-07-25 13:52:31.019432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.110 [2024-07-25 13:52:31.019503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.110 [2024-07-25 13:52:31.037419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.110 [2024-07-25 13:52:31.037479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.110 [2024-07-25 13:52:31.052092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.110 [2024-07-25 13:52:31.052145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.110 [2024-07-25 13:52:31.070038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.110 [2024-07-25 13:52:31.070100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.110 [2024-07-25 13:52:31.085334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.110 [2024-07-25 13:52:31.085390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.110 [2024-07-25 13:52:31.095426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.110 [2024-07-25 13:52:31.095488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.110 [2024-07-25 13:52:31.110829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.110 [2024-07-25 13:52:31.110886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.110 [2024-07-25 13:52:31.127227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.110 [2024-07-25 13:52:31.127288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.368 [2024-07-25 13:52:31.143930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.368 [2024-07-25 13:52:31.143988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.368 [2024-07-25 13:52:31.161722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.368 [2024-07-25 13:52:31.161783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.368 [2024-07-25 13:52:31.177468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.368 [2024-07-25 13:52:31.177524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.368 [2024-07-25 13:52:31.195610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.368 [2024-07-25 13:52:31.195661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.368 [2024-07-25 13:52:31.210851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.368 [2024-07-25 13:52:31.210907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.368 [2024-07-25 13:52:31.220484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.368 [2024-07-25 13:52:31.220533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.368 [2024-07-25 13:52:31.235873] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.368 [2024-07-25 13:52:31.235935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.368 [2024-07-25 13:52:31.245979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.368 [2024-07-25 13:52:31.246029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.368 [2024-07-25 13:52:31.260485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.368 [2024-07-25 13:52:31.260556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.368 [2024-07-25 13:52:31.276329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.368 [2024-07-25 13:52:31.276386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.368 [2024-07-25 13:52:31.285895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.368 [2024-07-25 13:52:31.285949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.368 [2024-07-25 13:52:31.300590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.368 [2024-07-25 13:52:31.300648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.368 [2024-07-25 13:52:31.315881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.368 [2024-07-25 13:52:31.315938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.368 [2024-07-25 13:52:31.325725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.368 [2024-07-25 13:52:31.325774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.368 [2024-07-25 13:52:31.341563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.368 [2024-07-25 13:52:31.341620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.368 [2024-07-25 13:52:31.359141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.368 [2024-07-25 13:52:31.359189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.368 [2024-07-25 13:52:31.375204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.368 [2024-07-25 13:52:31.375249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.368 [2024-07-25 13:52:31.392589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.368 [2024-07-25 13:52:31.392634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.626 [2024-07-25 13:52:31.408628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.626 [2024-07-25 13:52:31.408675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.626 [2024-07-25 13:52:31.425741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.626 [2024-07-25 13:52:31.425799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.626 [2024-07-25 13:52:31.443543] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.626 [2024-07-25 13:52:31.443600] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.626 [2024-07-25 13:52:31.458673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.626 [2024-07-25 13:52:31.458723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.626 [2024-07-25 13:52:31.468060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.626 [2024-07-25 13:52:31.468112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.626 [2024-07-25 13:52:31.483754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.626 [2024-07-25 13:52:31.483813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.626 [2024-07-25 13:52:31.500537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.626 [2024-07-25 13:52:31.500607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.626 [2024-07-25 13:52:31.516717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.626 [2024-07-25 13:52:31.516775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.626 [2024-07-25 13:52:31.533083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.626 [2024-07-25 13:52:31.533140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.626 [2024-07-25 13:52:31.551551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.626 [2024-07-25 13:52:31.551610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.626 [2024-07-25 13:52:31.566358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.626 [2024-07-25 13:52:31.566414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.626 [2024-07-25 13:52:31.582038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.626 [2024-07-25 13:52:31.582096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.626 [2024-07-25 13:52:31.599842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.626 [2024-07-25 13:52:31.599899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.626 [2024-07-25 13:52:31.614843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.626 [2024-07-25 13:52:31.614899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.626 [2024-07-25 13:52:31.624429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.626 [2024-07-25 13:52:31.624490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.626 [2024-07-25 13:52:31.639510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.626 [2024-07-25 13:52:31.639581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.626 [2024-07-25 13:52:31.655625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.626 [2024-07-25 13:52:31.655674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.884 [2024-07-25 13:52:31.673493] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.884 [2024-07-25 13:52:31.673551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.884 [2024-07-25 13:52:31.688963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.884 [2024-07-25 13:52:31.689023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.884 [2024-07-25 13:52:31.699325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.884 [2024-07-25 13:52:31.699401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.884 [2024-07-25 13:52:31.714407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.884 [2024-07-25 13:52:31.714465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.884 [2024-07-25 13:52:31.730511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.884 [2024-07-25 13:52:31.730568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.884 [2024-07-25 13:52:31.748026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.884 [2024-07-25 13:52:31.748106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.884 [2024-07-25 13:52:31.763066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.884 [2024-07-25 13:52:31.763129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.884 [2024-07-25 13:52:31.779506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.884 [2024-07-25 13:52:31.779560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.884 [2024-07-25 13:52:31.797785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.884 [2024-07-25 13:52:31.797845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.884 [2024-07-25 13:52:31.812441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.884 [2024-07-25 13:52:31.812508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.884 [2024-07-25 13:52:31.828167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-07-25 13:52:31.828220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-07-25 13:52:31.844716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-07-25 13:52:31.844772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-07-25 13:52:31.861955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-07-25 13:52:31.862016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-07-25 13:52:31.879354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-07-25 13:52:31.879412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-07-25 13:52:31.894064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-07-25 13:52:31.894123] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 [2024-07-25 13:52:31.909730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-07-25 13:52:31.909793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.143 [2024-07-25 13:52:31.926519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.143 [2024-07-25 13:52:31.926588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.143 [2024-07-25 13:52:31.944507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.143 [2024-07-25 13:52:31.944574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.143 [2024-07-25 13:52:31.960284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.143 [2024-07-25 13:52:31.960364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.143 [2024-07-25 13:52:31.971296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.143 [2024-07-25 13:52:31.971375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.143 [2024-07-25 13:52:31.984938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.143 [2024-07-25 13:52:31.985003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.143 [2024-07-25 13:52:32.001589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.143 [2024-07-25 13:52:32.001671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.143 [2024-07-25 13:52:32.018449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.143 [2024-07-25 13:52:32.018509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.143 [2024-07-25 13:52:32.029402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.143 [2024-07-25 13:52:32.029461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.143 [2024-07-25 13:52:32.046144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.143 [2024-07-25 13:52:32.046228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.143 [2024-07-25 13:52:32.063375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.143 [2024-07-25 13:52:32.063436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.143 [2024-07-25 13:52:32.078479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.143 [2024-07-25 13:52:32.078542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.143 [2024-07-25 13:52:32.088636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.143 [2024-07-25 13:52:32.088688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.143 [2024-07-25 13:52:32.103334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.143 [2024-07-25 13:52:32.103390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.143 [2024-07-25 13:52:32.119588] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.143 [2024-07-25 13:52:32.119645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.143 [2024-07-25 13:52:32.135866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.143 [2024-07-25 13:52:32.135931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.143 [2024-07-25 13:52:32.154232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.143 [2024-07-25 13:52:32.154296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.143 [2024-07-25 13:52:32.169502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.143 [2024-07-25 13:52:32.169568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.402 [2024-07-25 13:52:32.178787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.402 [2024-07-25 13:52:32.178838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.402 [2024-07-25 13:52:32.195464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.402 [2024-07-25 13:52:32.195523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.402 [2024-07-25 13:52:32.211668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.402 [2024-07-25 13:52:32.211733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.402 [2024-07-25 13:52:32.229700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.402 [2024-07-25 13:52:32.229766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.402 [2024-07-25 13:52:32.244694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.402 [2024-07-25 13:52:32.244752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.402 [2024-07-25 13:52:32.254569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.402 [2024-07-25 13:52:32.254625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.402 [2024-07-25 13:52:32.270956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.402 [2024-07-25 13:52:32.271019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.402 [2024-07-25 13:52:32.288850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.402 [2024-07-25 13:52:32.288915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.402 [2024-07-25 13:52:32.303784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.402 [2024-07-25 13:52:32.303847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.402 [2024-07-25 13:52:32.313627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.402 [2024-07-25 13:52:32.313682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.402 [2024-07-25 13:52:32.329115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.402 [2024-07-25 13:52:32.329172] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.402 [2024-07-25 13:52:32.345659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.402 [2024-07-25 13:52:32.345717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.402 [2024-07-25 13:52:32.361451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.402 [2024-07-25 13:52:32.361509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.402 [2024-07-25 13:52:32.371132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.402 [2024-07-25 13:52:32.371182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.402 [2024-07-25 13:52:32.387225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.402 [2024-07-25 13:52:32.387282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.402 [2024-07-25 13:52:32.404484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.402 [2024-07-25 13:52:32.404542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.402 [2024-07-25 13:52:32.419854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.402 [2024-07-25 13:52:32.419908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.402 [2024-07-25 13:52:32.429040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.402 [2024-07-25 13:52:32.429090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.661 [2024-07-25 13:52:32.445201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.661 [2024-07-25 13:52:32.445245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.661 [2024-07-25 13:52:32.462233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.661 [2024-07-25 13:52:32.462283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.661 [2024-07-25 13:52:32.478814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.661 [2024-07-25 13:52:32.478865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.661 [2024-07-25 13:52:32.488765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.661 [2024-07-25 13:52:32.488806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.661 [2024-07-25 13:52:32.503601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.661 [2024-07-25 13:52:32.503648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.661 [2024-07-25 13:52:32.513987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.661 [2024-07-25 13:52:32.514030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.661 [2024-07-25 13:52:32.525317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.661 [2024-07-25 13:52:32.525360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.661 [2024-07-25 13:52:32.541552] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.661 [2024-07-25 13:52:32.541599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.661 [2024-07-25 13:52:32.557102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.661 [2024-07-25 13:52:32.557158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.661 [2024-07-25 13:52:32.572287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.661 [2024-07-25 13:52:32.572350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.661 [2024-07-25 13:52:32.582099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.661 [2024-07-25 13:52:32.582149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.661 [2024-07-25 13:52:32.598082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.661 [2024-07-25 13:52:32.598137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.661 [2024-07-25 13:52:32.616157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.661 [2024-07-25 13:52:32.616211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.661 [2024-07-25 13:52:32.631417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.661 [2024-07-25 13:52:32.631478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.661 [2024-07-25 13:52:32.641531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.661 [2024-07-25 13:52:32.641582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.661 [2024-07-25 13:52:32.657621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.661 [2024-07-25 13:52:32.657674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.661 [2024-07-25 13:52:32.675071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.661 [2024-07-25 13:52:32.675119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.661 [2024-07-25 13:52:32.690468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.661 [2024-07-25 13:52:32.690517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.920 [2024-07-25 13:52:32.699925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.920 [2024-07-25 13:52:32.699974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.920 [2024-07-25 13:52:32.715796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.920 [2024-07-25 13:52:32.715851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.920 [2024-07-25 13:52:32.732911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.920 [2024-07-25 13:52:32.732968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.920 [2024-07-25 13:52:32.749343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.920 [2024-07-25 13:52:32.749394] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.920 [2024-07-25 13:52:32.768267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.920 [2024-07-25 13:52:32.768337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.920 [2024-07-25 13:52:32.783294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.920 [2024-07-25 13:52:32.783360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.920 [2024-07-25 13:52:32.800963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.920 [2024-07-25 13:52:32.801022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.920 [2024-07-25 13:52:32.816940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.920 [2024-07-25 13:52:32.816996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.920 [2024-07-25 13:52:32.834663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.920 [2024-07-25 13:52:32.834721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.920 [2024-07-25 13:52:32.850750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.920 [2024-07-25 13:52:32.850801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.920 [2024-07-25 13:52:32.867390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.920 [2024-07-25 13:52:32.867449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.920 [2024-07-25 13:52:32.884157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.920 [2024-07-25 13:52:32.884212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.920 [2024-07-25 13:52:32.900961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.920 [2024-07-25 13:52:32.901018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.920 [2024-07-25 13:52:32.917134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.920 [2024-07-25 13:52:32.917190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.920 [2024-07-25 13:52:32.934790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.920 [2024-07-25 13:52:32.934847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.920 [2024-07-25 13:52:32.949666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.920 [2024-07-25 13:52:32.949725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.180 [2024-07-25 13:52:32.965725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.180 [2024-07-25 13:52:32.965779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.180 [2024-07-25 13:52:32.983588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.180 [2024-07-25 13:52:32.983648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.180 [2024-07-25 13:52:32.998828] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.180 [2024-07-25 13:52:32.998884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.180 [2024-07-25 13:52:33.008872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.180 [2024-07-25 13:52:33.008920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.180 [2024-07-25 13:52:33.024162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.180 [2024-07-25 13:52:33.024215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.180 [2024-07-25 13:52:33.040398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.180 [2024-07-25 13:52:33.040449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.180 [2024-07-25 13:52:33.057912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.180 [2024-07-25 13:52:33.057965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.180 [2024-07-25 13:52:33.073935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.180 [2024-07-25 13:52:33.073988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.180 [2024-07-25 13:52:33.091257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.180 [2024-07-25 13:52:33.091322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.180 [2024-07-25 13:52:33.107678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.180 [2024-07-25 13:52:33.107735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.180 [2024-07-25 13:52:33.124996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.180 [2024-07-25 13:52:33.125049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.180 [2024-07-25 13:52:33.142968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.180 [2024-07-25 13:52:33.143022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.180 [2024-07-25 13:52:33.158095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.180 [2024-07-25 13:52:33.158148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.180 [2024-07-25 13:52:33.173789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.180 [2024-07-25 13:52:33.173842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.180 [2024-07-25 13:52:33.191107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.180 [2024-07-25 13:52:33.191160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.180 [2024-07-25 13:52:33.207338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.180 [2024-07-25 13:52:33.207392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.438 [2024-07-25 13:52:33.223868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.438 [2024-07-25 13:52:33.223921] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.438 [2024-07-25 13:52:33.240527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.438 [2024-07-25 13:52:33.240580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.438 [2024-07-25 13:52:33.258631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.438 [2024-07-25 13:52:33.258692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.438 [2024-07-25 13:52:33.273677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.438 [2024-07-25 13:52:33.273728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.438 [2024-07-25 13:52:33.283909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.438 [2024-07-25 13:52:33.283958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.438 [2024-07-25 13:52:33.299334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.438 [2024-07-25 13:52:33.299387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.438 [2024-07-25 13:52:33.315969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.438 [2024-07-25 13:52:33.316034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.438 [2024-07-25 13:52:33.332741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.438 [2024-07-25 13:52:33.332796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.438 [2024-07-25 13:52:33.348729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.438 [2024-07-25 13:52:33.348787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.438 [2024-07-25 13:52:33.358496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.438 [2024-07-25 13:52:33.358545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.438 [2024-07-25 13:52:33.374524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.438 [2024-07-25 13:52:33.374579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.438 [2024-07-25 13:52:33.390245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.438 [2024-07-25 13:52:33.390323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.438 [2024-07-25 13:52:33.406834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.438 [2024-07-25 13:52:33.406891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.438 [2024-07-25 13:52:33.423566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.438 [2024-07-25 13:52:33.423631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.438 [2024-07-25 13:52:33.439902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.438 [2024-07-25 13:52:33.439960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.438 [2024-07-25 13:52:33.457068] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.438 [2024-07-25 13:52:33.457135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.696 [2024-07-25 13:52:33.473724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.696 [2024-07-25 13:52:33.473778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.696 [2024-07-25 13:52:33.490426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.696 [2024-07-25 13:52:33.490481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.696 [2024-07-25 13:52:33.506717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.696 [2024-07-25 13:52:33.506781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.696 [2024-07-25 13:52:33.524906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.696 [2024-07-25 13:52:33.524967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.696 [2024-07-25 13:52:33.540550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.696 [2024-07-25 13:52:33.540612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.696 [2024-07-25 13:52:33.557332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.696 [2024-07-25 13:52:33.557373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.696 [2024-07-25 13:52:33.573683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.696 [2024-07-25 13:52:33.573731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.696 [2024-07-25 13:52:33.591953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.696 [2024-07-25 13:52:33.592008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.696 [2024-07-25 13:52:33.606912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.696 [2024-07-25 13:52:33.606965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.696 [2024-07-25 13:52:33.617381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.696 [2024-07-25 13:52:33.617433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.696 [2024-07-25 13:52:33.633269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.696 [2024-07-25 13:52:33.633338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.696 [2024-07-25 13:52:33.648533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.696 [2024-07-25 13:52:33.648600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.696 [2024-07-25 13:52:33.659032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.696 [2024-07-25 13:52:33.659097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.696 [2024-07-25 13:52:33.673835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.696 [2024-07-25 13:52:33.673899] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.696 [2024-07-25 13:52:33.692005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.696 [2024-07-25 13:52:33.692068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.696 [2024-07-25 13:52:33.707290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.696 [2024-07-25 13:52:33.707361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.696 [2024-07-25 13:52:33.717088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.696 [2024-07-25 13:52:33.717144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.954 [2024-07-25 13:52:33.731981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.954 [2024-07-25 13:52:33.732043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.954 [2024-07-25 13:52:33.748929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.954 [2024-07-25 13:52:33.748997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.954 [2024-07-25 13:52:33.767313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.954 [2024-07-25 13:52:33.767371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.954 [2024-07-25 13:52:33.782444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.954 [2024-07-25 13:52:33.782505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.954 [2024-07-25 13:52:33.792510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.954 [2024-07-25 13:52:33.792568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.954 [2024-07-25 13:52:33.808710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.954 [2024-07-25 13:52:33.808771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.954 [2024-07-25 13:52:33.825024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.954 [2024-07-25 13:52:33.825086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.954 [2024-07-25 13:52:33.842927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.954 [2024-07-25 13:52:33.842993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.954 [2024-07-25 13:52:33.858028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.954 [2024-07-25 13:52:33.858096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.954 [2024-07-25 13:52:33.875978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.954 [2024-07-25 13:52:33.876044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.954 [2024-07-25 13:52:33.891366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.954 [2024-07-25 13:52:33.891435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.954 [2024-07-25 13:52:33.901692] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.954 [2024-07-25 13:52:33.901758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.954 [2024-07-25 13:52:33.918024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.954 [2024-07-25 13:52:33.918090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.954 [2024-07-25 13:52:33.932923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.954 [2024-07-25 13:52:33.932991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.954 [2024-07-25 13:52:33.948989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.954 [2024-07-25 13:52:33.949054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.954 [2024-07-25 13:52:33.966999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.954 [2024-07-25 13:52:33.967066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.954 [2024-07-25 13:52:33.981181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.955 [2024-07-25 13:52:33.981250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.212 [2024-07-25 13:52:33.997983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.212 [2024-07-25 13:52:33.998051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.212 [2024-07-25 13:52:34.013979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.212 [2024-07-25 13:52:34.014045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.212 [2024-07-25 13:52:34.030804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.212 [2024-07-25 13:52:34.030877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.212 [2024-07-25 13:52:34.047851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.212 [2024-07-25 13:52:34.047914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.212 [2024-07-25 13:52:34.064415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.212 [2024-07-25 13:52:34.064489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.212 [2024-07-25 13:52:34.081522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.212 [2024-07-25 13:52:34.081584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.212 [2024-07-25 13:52:34.097800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.212 [2024-07-25 13:52:34.097863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.212 [2024-07-25 13:52:34.114460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.212 [2024-07-25 13:52:34.114524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.212 [2024-07-25 13:52:34.130327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.212 [2024-07-25 13:52:34.130411] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.212 [2024-07-25 13:52:34.146642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.212 [2024-07-25 13:52:34.146707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.212 [2024-07-25 13:52:34.163397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.212 [2024-07-25 13:52:34.163461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.212 [2024-07-25 13:52:34.180171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.212 [2024-07-25 13:52:34.180236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.212 [2024-07-25 13:52:34.196788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.212 [2024-07-25 13:52:34.196852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.212 [2024-07-25 13:52:34.213530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.212 [2024-07-25 13:52:34.213591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.212 [2024-07-25 13:52:34.229429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.212 [2024-07-25 13:52:34.229492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.470 [2024-07-25 13:52:34.247984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.470 [2024-07-25 13:52:34.248052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.470 [2024-07-25 13:52:34.262586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.470 [2024-07-25 13:52:34.262649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.470 [2024-07-25 13:52:34.279409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.470 [2024-07-25 13:52:34.279471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.470 [2024-07-25 13:52:34.295234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.470 [2024-07-25 13:52:34.295331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.470 [2024-07-25 13:52:34.305346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.470 [2024-07-25 13:52:34.305405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.470 [2024-07-25 13:52:34.323473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.470 [2024-07-25 13:52:34.323542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.470 [2024-07-25 13:52:34.338908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.470 [2024-07-25 13:52:34.338977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.470 [2024-07-25 13:52:34.354817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.470 [2024-07-25 13:52:34.354880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.470 [2024-07-25 13:52:34.372406] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.470 [2024-07-25 13:52:34.372479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.470 [2024-07-25 13:52:34.388253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.470 [2024-07-25 13:52:34.388323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.470 [2024-07-25 13:52:34.404992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.470 [2024-07-25 13:52:34.405056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.470 [2024-07-25 13:52:34.421264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.470 [2024-07-25 13:52:34.421339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.470 [2024-07-25 13:52:34.438657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.470 [2024-07-25 13:52:34.438719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.470 [2024-07-25 13:52:34.454191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.470 [2024-07-25 13:52:34.454259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.470 [2024-07-25 13:52:34.470180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.470 [2024-07-25 13:52:34.470243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.470 [2024-07-25 13:52:34.488623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.470 [2024-07-25 13:52:34.488683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.728 [2024-07-25 13:52:34.503487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.728 [2024-07-25 13:52:34.503550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.728 [2024-07-25 13:52:34.519000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.728 [2024-07-25 13:52:34.519062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.728 [2024-07-25 13:52:34.535082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.728 [2024-07-25 13:52:34.535150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.728 [2024-07-25 13:52:34.544755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.728 [2024-07-25 13:52:34.544817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.728 [2024-07-25 13:52:34.560963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.728 [2024-07-25 13:52:34.561026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.728 [2024-07-25 13:52:34.579734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.728 [2024-07-25 13:52:34.579798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.728 [2024-07-25 13:52:34.594976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.728 [2024-07-25 13:52:34.595041] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.728 [2024-07-25 13:52:34.604953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.728 [2024-07-25 13:52:34.605014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.728 [2024-07-25 13:52:34.620661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.728 [2024-07-25 13:52:34.620721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.728 [2024-07-25 13:52:34.637976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.728 [2024-07-25 13:52:34.638038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.728 [2024-07-25 13:52:34.654208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.728 [2024-07-25 13:52:34.654273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.728 [2024-07-25 13:52:34.666999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.728 [2024-07-25 13:52:34.667068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.728 [2024-07-25 13:52:34.682972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.728 [2024-07-25 13:52:34.683039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.728 [2024-07-25 13:52:34.700562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.728 [2024-07-25 13:52:34.700637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.728 [2024-07-25 13:52:34.717093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.728 [2024-07-25 13:52:34.717154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.728 [2024-07-25 13:52:34.733291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.728 [2024-07-25 13:52:34.733372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.728 [2024-07-25 13:52:34.746918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.728 [2024-07-25 13:52:34.746979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.987 [2024-07-25 13:52:34.765841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.987 [2024-07-25 13:52:34.765931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.987 [2024-07-25 13:52:34.783407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.987 [2024-07-25 13:52:34.783502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.987 [2024-07-25 13:52:34.798848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.987 [2024-07-25 13:52:34.798917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.987 [2024-07-25 13:52:34.813954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.987 [2024-07-25 13:52:34.814018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.987 [2024-07-25 13:52:34.830349] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.987 [2024-07-25 13:52:34.830409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.987 [2024-07-25 13:52:34.847404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.987 [2024-07-25 13:52:34.847463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.987 [2024-07-25 13:52:34.863529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.987 [2024-07-25 13:52:34.863614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.987 [2024-07-25 13:52:34.880493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.987 [2024-07-25 13:52:34.880557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.987 [2024-07-25 13:52:34.896138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.987 [2024-07-25 13:52:34.896202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.987 [2024-07-25 13:52:34.912408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.987 [2024-07-25 13:52:34.912481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.987 [2024-07-25 13:52:34.928744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.987 [2024-07-25 13:52:34.928809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.987 [2024-07-25 13:52:34.937728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.987 [2024-07-25 13:52:34.937781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.987 00:09:45.987 Latency(us) 00:09:45.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.987 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:45.987 Nvme1n1 : 5.01 11229.12 87.73 0.00 0.00 11386.63 4617.31 25022.84 00:09:45.987 =================================================================================================================== 00:09:45.987 Total : 11229.12 87.73 0.00 0.00 11386.63 4617.31 25022.84 00:09:45.987 [2024-07-25 13:52:34.949352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.987 [2024-07-25 13:52:34.949403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.987 [2024-07-25 13:52:34.961332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.987 [2024-07-25 13:52:34.961386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.987 [2024-07-25 13:52:34.973340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.987 [2024-07-25 13:52:34.973391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.987 [2024-07-25 13:52:34.985338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.987 [2024-07-25 13:52:34.985403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.987 [2024-07-25 13:52:34.997336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.987 [2024-07-25 13:52:34.997388] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.987 [2024-07-25 13:52:35.009346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.987 [2024-07-25 13:52:35.009401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.246 [2024-07-25 13:52:35.021352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.246 [2024-07-25 13:52:35.021408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.246 [2024-07-25 13:52:35.033357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.246 [2024-07-25 13:52:35.033413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.246 [2024-07-25 13:52:35.045358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.246 [2024-07-25 13:52:35.045414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.246 [2024-07-25 13:52:35.057352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.246 [2024-07-25 13:52:35.057400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.246 [2024-07-25 13:52:35.069348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.246 [2024-07-25 13:52:35.069395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.246 [2024-07-25 13:52:35.081370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.246 [2024-07-25 13:52:35.081426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.246 [2024-07-25 13:52:35.093364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.246 [2024-07-25 13:52:35.093412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.246 [2024-07-25 13:52:35.105362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.246 [2024-07-25 13:52:35.105408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.246 [2024-07-25 13:52:35.117367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.246 [2024-07-25 13:52:35.117412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.246 [2024-07-25 13:52:35.129370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.246 [2024-07-25 13:52:35.129418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.246 [2024-07-25 13:52:35.141404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.246 [2024-07-25 13:52:35.141460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.246 [2024-07-25 13:52:35.153368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.246 [2024-07-25 13:52:35.153411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.246 [2024-07-25 13:52:35.165381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.246 [2024-07-25 13:52:35.165428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.246 [2024-07-25 13:52:35.177415] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.246 [2024-07-25 13:52:35.177469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.246 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (66909) - No such process 00:09:46.246 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 66909 00:09:46.246 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.246 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.246 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.246 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.246 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:46.246 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.246 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.246 delay0 00:09:46.246 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.246 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:46.246 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.246 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.246 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.246 13:52:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:46.504 [2024-07-25 13:52:35.383882] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:53.063 Initializing NVMe Controllers 00:09:53.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:53.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:53.063 Initialization complete. Launching workers. 
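The abort pass just launched (its per-namespace and abort counters follow below) condenses to the short RPC sequence traced above: NSID 1 is dropped from nqn.2016-06.io.spdk:cnode1, a delay bdev (delay0) is layered on top of malloc0, delay0 is re-added as NSID 1, and the abort example is pointed at the TCP listener on 10.0.0.2:4420. A minimal stand-alone sketch of that sequence, assuming the SPDK repo root as working directory, a target already serving cnode1, and that the harness's rpc_cmd wrapper behaves like scripts/rpc.py on the default RPC socket:

    # swap NSID 1 over to a delay bdev (latency values copied from the traced rpc_cmd call)
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # queue slow I/O against the delayed namespace and abort it over NVMe/TCP
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'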
00:09:53.063 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 95 00:09:53.063 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 382, failed to submit 33 00:09:53.063 success 252, unsuccess 130, failed 0 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:53.063 rmmod nvme_tcp 00:09:53.063 rmmod nvme_fabrics 00:09:53.063 rmmod nvme_keyring 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 66760 ']' 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 66760 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 66760 ']' 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 66760 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66760 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66760' 00:09:53.063 killing process with pid 66760 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 66760 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 66760 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:53.063 00:09:53.063 real 0m24.940s 00:09:53.063 user 0m40.422s 00:09:53.063 sys 0m7.034s 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.063 ************************************ 00:09:53.063 END TEST nvmf_zcopy 00:09:53.063 ************************************ 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:53.063 ************************************ 00:09:53.063 START TEST nvmf_nmic 00:09:53.063 ************************************ 00:09:53.063 13:52:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:53.063 * Looking for test storage... 00:09:53.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.063 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.064 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.064 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:53.064 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.064 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:53.064 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:53.323 Cannot find device "nvmf_tgt_br" 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:53.323 Cannot find device "nvmf_tgt_br2" 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:53.323 Cannot find device "nvmf_tgt_br" 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:53.323 Cannot find device "nvmf_tgt_br2" 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:53.323 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:53.323 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:53.323 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:53.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:53.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:09:53.582 00:09:53.582 --- 10.0.0.2 ping statistics --- 00:09:53.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.582 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:53.582 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:53.582 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:09:53.582 00:09:53.582 --- 10.0.0.3 ping statistics --- 00:09:53.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.582 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:53.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:53.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:53.582 00:09:53.582 --- 10.0.0.1 ping statistics --- 00:09:53.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.582 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=67231 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:53.582 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 67231 00:09:53.583 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 67231 ']' 00:09:53.583 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.583 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:53.583 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.583 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:53.583 13:52:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.583 [2024-07-25 13:52:42.508503] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
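For reference, the nvmf_veth_init trace above (nvmf/common.sh@141-207) builds the test network before any NVMe/TCP traffic is sent: a private namespace for the target, three veth pairs, a bridge joining the host-side ends, an iptables rule for port 4420, and the sanity pings. The earlier "Cannot find device" and "Cannot open network namespace" messages come from the cleanup pass at nvmf/common.sh@154-163, which removes leftovers from a previous run and is expected to fail on a fresh node. A condensed sketch of the equivalent commands, reconstructed only from the trace (the interface names, the nvmf_tgt_ns_spdk namespace and the 10.0.0.0/24 addresses are taken from the log; this is not the script verbatim and drops its exact ordering and error handling):

  ip netns add nvmf_tgt_ns_spdk                                 # target side lives in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator      <-> bridge
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target   <-> bridge
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target  <-> bridge
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for br in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$br" master nvmf_br && ip link set "$br" up  # host-side veth ends join the bridge
  done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the default port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # host -> target addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> initiator

The three successful pings confirm the bridge forwards in both directions before the target application is started.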
00:09:53.583 [2024-07-25 13:52:42.508644] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.841 [2024-07-25 13:52:42.650732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.841 [2024-07-25 13:52:42.772661] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.841 [2024-07-25 13:52:42.772725] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.841 [2024-07-25 13:52:42.772737] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.841 [2024-07-25 13:52:42.772746] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.841 [2024-07-25 13:52:42.772754] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.841 [2024-07-25 13:52:42.773161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.841 [2024-07-25 13:52:42.773496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.841 [2024-07-25 13:52:42.773571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.841 [2024-07-25 13:52:42.773576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.841 [2024-07-25 13:52:42.826478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:54.408 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:54.408 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:54.408 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:54.408 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:54.408 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.666 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.666 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.667 [2024-07-25 13:52:43.451650] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.667 Malloc0 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:54.667 13:52:43 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.667 [2024-07-25 13:52:43.518894] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:54.667 test case1: single bdev can't be used in multiple subsystems 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.667 [2024-07-25 13:52:43.542751] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:54.667 [2024-07-25 13:52:43.542792] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:54.667 [2024-07-25 13:52:43.542805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.667 request: 00:09:54.667 { 00:09:54.667 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:54.667 "namespace": { 00:09:54.667 "bdev_name": "Malloc0", 00:09:54.667 "no_auto_visible": false 00:09:54.667 }, 00:09:54.667 "method": "nvmf_subsystem_add_ns", 00:09:54.667 "req_id": 1 00:09:54.667 } 00:09:54.667 Got JSON-RPC error response 00:09:54.667 response: 00:09:54.667 { 00:09:54.667 "code": -32602, 00:09:54.667 "message": "Invalid parameters" 00:09:54.667 } 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:54.667 Adding namespace failed - expected result. 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:54.667 test case2: host connect to nvmf target in multiple paths 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.667 [2024-07-25 13:52:43.558919] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid=71427938-e211-49fa-b6ad-486cdab0bd89 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:54.667 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid=71427938-e211-49fa-b6ad-486cdab0bd89 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:54.924 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:54.924 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:54.925 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:54.925 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:54.925 13:52:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:56.822 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:56.822 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:56.822 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:56.822 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:56.822 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:56.822 13:52:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:56.822 13:52:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:56.822 [global] 00:09:56.822 thread=1 00:09:56.822 invalidate=1 00:09:56.822 rw=write 00:09:56.822 time_based=1 00:09:56.822 runtime=1 00:09:56.822 ioengine=libaio 00:09:56.822 direct=1 00:09:56.822 bs=4096 00:09:56.822 iodepth=1 00:09:56.822 norandommap=0 00:09:56.822 numjobs=1 00:09:56.822 00:09:56.822 verify_dump=1 00:09:56.822 verify_backlog=512 00:09:56.822 verify_state_save=0 00:09:56.822 do_verify=1 00:09:56.822 verify=crc32c-intel 00:09:57.080 [job0] 00:09:57.080 filename=/dev/nvme0n1 00:09:57.080 Could not set queue depth (nvme0n1) 00:09:57.080 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.080 fio-3.35 00:09:57.080 Starting 1 thread 00:09:58.454 00:09:58.454 job0: (groupid=0, jobs=1): err= 0: pid=67324: Thu Jul 25 13:52:47 2024 00:09:58.454 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:58.454 slat (nsec): min=12585, max=42780, avg=14576.63, stdev=2621.84 00:09:58.454 clat (usec): min=136, max=364, avg=169.81, stdev=15.90 00:09:58.454 lat (usec): min=151, max=377, avg=184.39, stdev=16.23 00:09:58.454 clat percentiles (usec): 00:09:58.454 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:09:58.454 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:09:58.454 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 194], 00:09:58.454 | 99.00th=[ 227], 99.50th=[ 249], 99.90th=[ 273], 99.95th=[ 293], 00:09:58.454 | 99.99th=[ 363] 00:09:58.454 write: IOPS=3345, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1001msec); 0 zone resets 00:09:58.454 slat (usec): min=14, max=188, avg=21.26, stdev= 5.58 00:09:58.454 clat (usec): min=85, max=473, avg=105.02, stdev=13.76 00:09:58.454 lat (usec): min=105, max=494, avg=126.28, stdev=16.27 00:09:58.454 clat percentiles (usec): 00:09:58.454 | 1.00th=[ 89], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 96], 00:09:58.454 | 30.00th=[ 98], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 105], 00:09:58.454 | 70.00th=[ 109], 80.00th=[ 112], 90.00th=[ 120], 95.00th=[ 127], 00:09:58.454 | 99.00th=[ 151], 99.50th=[ 161], 99.90th=[ 192], 99.95th=[ 243], 00:09:58.454 | 99.99th=[ 474] 00:09:58.454 bw ( KiB/s): min=13744, max=13744, per=100.00%, avg=13744.00, stdev= 0.00, samples=1 00:09:58.454 iops : min= 3436, max= 3436, avg=3436.00, stdev= 0.00, samples=1 00:09:58.454 lat (usec) : 100=18.94%, 250=80.81%, 500=0.25% 00:09:58.454 cpu : usr=2.70%, sys=8.70%, ctx=6421, majf=0, minf=2 00:09:58.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.454 issued rwts: total=3072,3349,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.454 00:09:58.454 Run status group 0 (all jobs): 00:09:58.454 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:58.454 WRITE: bw=13.1MiB/s (13.7MB/s), 13.1MiB/s-13.1MiB/s (13.7MB/s-13.7MB/s), io=13.1MiB (13.7MB), run=1001-1001msec 00:09:58.454 00:09:58.454 Disk stats (read/write): 00:09:58.454 nvme0n1: ios=2761/3072, merge=0/0, ticks=487/359, in_queue=846, 
util=91.68% 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:58.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:58.454 rmmod nvme_tcp 00:09:58.454 rmmod nvme_fabrics 00:09:58.454 rmmod nvme_keyring 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 67231 ']' 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 67231 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 67231 ']' 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 67231 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67231 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:58.454 killing process with pid 67231 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67231' 00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 67231 
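Stripped of the rpc_cmd and xtrace plumbing, the nvmf_nmic run traced above checks two behaviours: a bdev that already backs a namespace in one subsystem cannot be added to a second subsystem, and a single subsystem can be reached over two listeners. A condensed sketch of that sequence, using only the rpc.py and nvme-cli arguments visible in the log (rpc_cmd in the test wraps rpc.py, and the real script also records status codes that this sketch omits):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  UUID=71427938-e211-49fa-b6ad-486cdab0bd89                 # host NQN/ID used on this CI node

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # test case1: adding the same bdev to a second subsystem must fail
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
      && echo 'unexpected success' \
      || echo ' Adding namespace failed - expected result.'

  # test case2: reach cnode1 through two listeners (two paths to the same namespace)
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$UUID --hostid=$UUID \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$UUID --hostid=$UUID \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

The JSON-RPC error shown above (code -32602, bdev Malloc0 already claimed: type exclusive_write) is that negative check passing, not a test failure, and the later "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)" line confirms both paths from test case2 were established and then torn down.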
00:09:58.454 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 67231 00:09:58.713 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:58.713 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:58.713 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:58.713 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:58.713 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:58.713 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.713 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.713 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.713 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:58.713 00:09:58.713 real 0m5.605s 00:09:58.713 user 0m17.868s 00:09:58.713 sys 0m2.137s 00:09:58.713 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:58.713 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.713 ************************************ 00:09:58.713 END TEST nvmf_nmic 00:09:58.713 ************************************ 00:09:58.713 13:52:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:58.713 13:52:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:58.713 13:52:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:58.713 13:52:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:58.713 ************************************ 00:09:58.714 START TEST nvmf_fio_target 00:09:58.714 ************************************ 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:58.714 * Looking for test storage... 
00:09:58.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:58.714 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:58.973 
13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:58.973 Cannot find device "nvmf_tgt_br" 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:58.973 Cannot find device "nvmf_tgt_br2" 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:58.973 Cannot find device "nvmf_tgt_br" 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:58.973 Cannot find device "nvmf_tgt_br2" 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:58.973 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:58.973 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:58.973 
13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:58.973 13:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:58.973 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:59.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:09:59.231 00:09:59.231 --- 10.0.0.2 ping statistics --- 00:09:59.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.231 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:59.231 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:59.231 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:09:59.231 00:09:59.231 --- 10.0.0.3 ping statistics --- 00:09:59.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.231 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:59.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:59.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:09:59.231 00:09:59.231 --- 10.0.0.1 ping statistics --- 00:09:59.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.231 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=67506 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 67506 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 67506 ']' 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.231 13:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.231 [2024-07-25 13:52:48.168560] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
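At this point nvmfappstart does for the fio test what it did for nmic: it launches nvmf_tgt inside the target namespace (pid 67506 in the trace above) and blocks in waitforlisten until the RPC socket answers, after which fio.sh@19 creates the TCP transport. A minimal stand-in for that startup, assuming the paths shown in the log (waitforlisten in autotest_common.sh does more bookkeeping than this loop, and rpc_get_methods is only used here as a cheap readiness probe):

  SPDK=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                                         # pid of the backgrounded command

  # poll until the target answers on its default RPC socket (/var/tmp/spdk.sock)
  until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
      sleep 0.5
  done

  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192   # first RPC of fio.sh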
00:09:59.231 [2024-07-25 13:52:48.168697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.489 [2024-07-25 13:52:48.315381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.489 [2024-07-25 13:52:48.433674] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.489 [2024-07-25 13:52:48.433739] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.489 [2024-07-25 13:52:48.433752] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.489 [2024-07-25 13:52:48.433761] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.489 [2024-07-25 13:52:48.433768] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.489 [2024-07-25 13:52:48.433940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.489 [2024-07-25 13:52:48.434806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.489 [2024-07-25 13:52:48.434964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.489 [2024-07-25 13:52:48.434969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.489 [2024-07-25 13:52:48.487980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:00.055 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.055 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:00.055 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:00.055 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:00.055 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.314 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.314 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:00.572 [2024-07-25 13:52:49.393540] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.572 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:00.829 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:00.829 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.087 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:01.087 13:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.346 13:52:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:01.346 13:52:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.605 13:52:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:01.605 13:52:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:01.863 13:52:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.122 13:52:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:02.122 13:52:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.381 13:52:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:02.381 13:52:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.639 13:52:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:02.639 13:52:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:03.206 13:52:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:03.206 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:03.206 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.464 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:03.464 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:03.723 13:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.981 [2024-07-25 13:52:52.989879] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.239 13:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:04.498 13:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:04.498 13:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid=71427938-e211-49fa-b6ad-486cdab0bd89 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:04.758 13:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:04.758 13:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:04.758 13:52:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:04.758 13:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:04.758 13:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:04.758 13:52:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:06.659 13:52:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:06.659 13:52:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:06.659 13:52:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:06.659 13:52:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:06.659 13:52:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:06.659 13:52:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:06.659 13:52:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:06.659 [global] 00:10:06.659 thread=1 00:10:06.659 invalidate=1 00:10:06.659 rw=write 00:10:06.659 time_based=1 00:10:06.659 runtime=1 00:10:06.659 ioengine=libaio 00:10:06.659 direct=1 00:10:06.659 bs=4096 00:10:06.659 iodepth=1 00:10:06.659 norandommap=0 00:10:06.659 numjobs=1 00:10:06.659 00:10:06.918 verify_dump=1 00:10:06.918 verify_backlog=512 00:10:06.918 verify_state_save=0 00:10:06.918 do_verify=1 00:10:06.918 verify=crc32c-intel 00:10:06.918 [job0] 00:10:06.918 filename=/dev/nvme0n1 00:10:06.918 [job1] 00:10:06.918 filename=/dev/nvme0n2 00:10:06.918 [job2] 00:10:06.918 filename=/dev/nvme0n3 00:10:06.918 [job3] 00:10:06.918 filename=/dev/nvme0n4 00:10:06.918 Could not set queue depth (nvme0n1) 00:10:06.918 Could not set queue depth (nvme0n2) 00:10:06.918 Could not set queue depth (nvme0n3) 00:10:06.918 Could not set queue depth (nvme0n4) 00:10:06.918 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.918 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.918 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.918 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.918 fio-3.35 00:10:06.918 Starting 4 threads 00:10:08.323 00:10:08.323 job0: (groupid=0, jobs=1): err= 0: pid=67690: Thu Jul 25 13:52:57 2024 00:10:08.323 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:08.323 slat (nsec): min=12520, max=58935, avg=17363.66, stdev=5807.97 00:10:08.323 clat (usec): min=152, max=367, avg=239.00, stdev=35.23 00:10:08.323 lat (usec): min=165, max=392, avg=256.36, stdev=35.40 00:10:08.323 clat percentiles (usec): 00:10:08.323 | 1.00th=[ 169], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 210], 00:10:08.323 | 30.00th=[ 221], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 247], 00:10:08.323 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 297], 00:10:08.323 | 99.00th=[ 334], 99.50th=[ 351], 99.90th=[ 359], 99.95th=[ 363], 00:10:08.323 | 99.99th=[ 367] 
00:10:08.323 write: IOPS=2190, BW=8763KiB/s (8974kB/s)(8772KiB/1001msec); 0 zone resets 00:10:08.323 slat (usec): min=16, max=127, avg=25.02, stdev= 8.10 00:10:08.323 clat (usec): min=106, max=1792, avg=187.64, stdev=48.52 00:10:08.323 lat (usec): min=126, max=1811, avg=212.66, stdev=49.34 00:10:08.323 clat percentiles (usec): 00:10:08.323 | 1.00th=[ 117], 5.00th=[ 135], 10.00th=[ 145], 20.00th=[ 159], 00:10:08.323 | 30.00th=[ 169], 40.00th=[ 178], 50.00th=[ 186], 60.00th=[ 194], 00:10:08.323 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 227], 95.00th=[ 245], 00:10:08.323 | 99.00th=[ 277], 99.50th=[ 318], 99.90th=[ 412], 99.95th=[ 502], 00:10:08.323 | 99.99th=[ 1795] 00:10:08.323 bw ( KiB/s): min= 8192, max= 8192, per=24.01%, avg=8192.00, stdev= 0.00, samples=1 00:10:08.323 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:08.323 lat (usec) : 250=80.78%, 500=19.17%, 750=0.02% 00:10:08.323 lat (msec) : 2=0.02% 00:10:08.323 cpu : usr=2.10%, sys=6.90%, ctx=4242, majf=0, minf=5 00:10:08.323 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.323 issued rwts: total=2048,2193,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.323 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.323 job1: (groupid=0, jobs=1): err= 0: pid=67691: Thu Jul 25 13:52:57 2024 00:10:08.323 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:08.323 slat (nsec): min=13730, max=57780, avg=18266.58, stdev=5004.26 00:10:08.323 clat (usec): min=145, max=342, avg=236.04, stdev=26.69 00:10:08.323 lat (usec): min=161, max=359, avg=254.30, stdev=27.00 00:10:08.323 clat percentiles (usec): 00:10:08.323 | 1.00th=[ 180], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 217], 00:10:08.323 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 241], 00:10:08.323 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 281], 00:10:08.323 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 334], 99.95th=[ 338], 00:10:08.323 | 99.99th=[ 343] 00:10:08.323 write: IOPS=2215, BW=8863KiB/s (9076kB/s)(8872KiB/1001msec); 0 zone resets 00:10:08.323 slat (nsec): min=19452, max=88629, avg=26944.91, stdev=6825.85 00:10:08.323 clat (usec): min=101, max=506, avg=185.31, stdev=28.25 00:10:08.323 lat (usec): min=127, max=529, avg=212.25, stdev=28.10 00:10:08.323 clat percentiles (usec): 00:10:08.323 | 1.00th=[ 123], 5.00th=[ 143], 10.00th=[ 153], 20.00th=[ 163], 00:10:08.323 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 190], 00:10:08.323 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 235], 00:10:08.323 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 289], 99.95th=[ 293], 00:10:08.323 | 99.99th=[ 506] 00:10:08.323 bw ( KiB/s): min= 9024, max= 9024, per=26.45%, avg=9024.00, stdev= 0.00, samples=1 00:10:08.323 iops : min= 2256, max= 2256, avg=2256.00, stdev= 0.00, samples=1 00:10:08.323 lat (usec) : 250=85.91%, 500=14.06%, 750=0.02% 00:10:08.323 cpu : usr=2.40%, sys=7.00%, ctx=4267, majf=0, minf=5 00:10:08.323 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.323 issued rwts: total=2048,2218,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.323 latency : target=0, window=0, percentile=100.00%, depth=1 
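For reference, the target-side setup traced above can be replayed outside the harness with the same rpc.py calls. The sketch below only collects them in the order they appear in this log: the address, port, NQN, serial and host UUID are the values this run used, the rpc variable is shorthand added here for readability, and the final loop condenses the harness's waitforserial helper.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
# two plain malloc bdevs used directly as namespaces
$rpc bdev_malloc_create 64 512        # -> Malloc0
$rpc bdev_malloc_create 64 512        # -> Malloc1
# two more combined into a RAID0 namespace
$rpc bdev_malloc_create 64 512        # -> Malloc2
$rpc bdev_malloc_create 64 512        # -> Malloc3
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
# three more combined into a concat namespace
$rpc bdev_malloc_create 64 512        # -> Malloc4
$rpc bdev_malloc_create 64 512        # -> Malloc5
$rpc bdev_malloc_create 64 512        # -> Malloc6
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
# one subsystem carrying all four namespaces, listening on NVMe/TCP
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
# connect from the initiator side and wait for all four namespaces to enumerate
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 \
    --hostid=71427938-e211-49fa-b6ad-486cdab0bd89 \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 4 ]; do sleep 2; done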
00:10:08.323 job2: (groupid=0, jobs=1): err= 0: pid=67692: Thu Jul 25 13:52:57 2024 00:10:08.323 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:08.323 slat (nsec): min=12523, max=55965, avg=15824.79, stdev=3058.60 00:10:08.323 clat (usec): min=163, max=2050, avg=243.47, stdev=55.61 00:10:08.323 lat (usec): min=176, max=2075, avg=259.29, stdev=56.01 00:10:08.323 clat percentiles (usec): 00:10:08.324 | 1.00th=[ 178], 5.00th=[ 194], 10.00th=[ 204], 20.00th=[ 217], 00:10:08.324 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 247], 00:10:08.324 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 297], 00:10:08.324 | 99.00th=[ 383], 99.50th=[ 433], 99.90th=[ 603], 99.95th=[ 758], 00:10:08.324 | 99.99th=[ 2057] 00:10:08.324 write: IOPS=2063, BW=8256KiB/s (8454kB/s)(8264KiB/1001msec); 0 zone resets 00:10:08.324 slat (usec): min=17, max=154, avg=26.71, stdev= 8.74 00:10:08.324 clat (usec): min=119, max=676, avg=196.74, stdev=33.19 00:10:08.324 lat (usec): min=141, max=697, avg=223.45, stdev=34.51 00:10:08.324 clat percentiles (usec): 00:10:08.324 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 172], 00:10:08.324 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 200], 00:10:08.324 | 70.00th=[ 208], 80.00th=[ 221], 90.00th=[ 237], 95.00th=[ 249], 00:10:08.324 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 429], 99.95th=[ 668], 00:10:08.324 | 99.99th=[ 676] 00:10:08.324 bw ( KiB/s): min= 8224, max= 8296, per=24.21%, avg=8260.00, stdev=50.91, samples=2 00:10:08.324 iops : min= 2056, max= 2074, avg=2065.00, stdev=12.73, samples=2 00:10:08.324 lat (usec) : 250=79.92%, 500=19.86%, 750=0.17%, 1000=0.02% 00:10:08.324 lat (msec) : 4=0.02% 00:10:08.324 cpu : usr=1.90%, sys=6.70%, ctx=4114, majf=0, minf=9 00:10:08.324 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.324 issued rwts: total=2048,2066,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.324 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.324 job3: (groupid=0, jobs=1): err= 0: pid=67693: Thu Jul 25 13:52:57 2024 00:10:08.324 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:08.324 slat (nsec): min=12405, max=41715, avg=15733.79, stdev=3026.10 00:10:08.324 clat (usec): min=154, max=1200, avg=244.95, stdev=45.52 00:10:08.324 lat (usec): min=169, max=1223, avg=260.68, stdev=45.78 00:10:08.324 clat percentiles (usec): 00:10:08.324 | 1.00th=[ 174], 5.00th=[ 188], 10.00th=[ 198], 20.00th=[ 212], 00:10:08.324 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 251], 00:10:08.324 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 310], 00:10:08.324 | 99.00th=[ 347], 99.50th=[ 363], 99.90th=[ 652], 99.95th=[ 881], 00:10:08.324 | 99.99th=[ 1205] 00:10:08.324 write: IOPS=2057, BW=8232KiB/s (8429kB/s)(8240KiB/1001msec); 0 zone resets 00:10:08.324 slat (usec): min=17, max=156, avg=23.94, stdev= 6.82 00:10:08.324 clat (usec): min=106, max=2878, avg=198.66, stdev=76.06 00:10:08.324 lat (usec): min=126, max=2900, avg=222.60, stdev=76.65 00:10:08.324 clat percentiles (usec): 00:10:08.324 | 1.00th=[ 126], 5.00th=[ 141], 10.00th=[ 151], 20.00th=[ 167], 00:10:08.324 | 30.00th=[ 178], 40.00th=[ 188], 50.00th=[ 196], 60.00th=[ 204], 00:10:08.324 | 70.00th=[ 212], 80.00th=[ 223], 90.00th=[ 241], 95.00th=[ 258], 00:10:08.324 | 99.00th=[ 302], 99.50th=[ 338], 99.90th=[ 988], 
99.95th=[ 1336], 00:10:08.324 | 99.99th=[ 2868] 00:10:08.324 bw ( KiB/s): min= 8192, max= 8192, per=24.01%, avg=8192.00, stdev= 0.00, samples=1 00:10:08.324 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:08.324 lat (usec) : 250=76.07%, 500=23.73%, 750=0.07%, 1000=0.05% 00:10:08.324 lat (msec) : 2=0.05%, 4=0.02% 00:10:08.324 cpu : usr=1.50%, sys=6.50%, ctx=4109, majf=0, minf=16 00:10:08.324 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.324 issued rwts: total=2048,2060,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.324 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.324 00:10:08.324 Run status group 0 (all jobs): 00:10:08.324 READ: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:10:08.324 WRITE: bw=33.3MiB/s (34.9MB/s), 8232KiB/s-8863KiB/s (8429kB/s-9076kB/s), io=33.3MiB (35.0MB), run=1001-1001msec 00:10:08.324 00:10:08.324 Disk stats (read/write): 00:10:08.324 nvme0n1: ios=1646/2048, merge=0/0, ticks=445/407, in_queue=852, util=88.98% 00:10:08.324 nvme0n2: ios=1671/2048, merge=0/0, ticks=419/405, in_queue=824, util=88.03% 00:10:08.324 nvme0n3: ios=1542/2008, merge=0/0, ticks=394/417, in_queue=811, util=89.54% 00:10:08.324 nvme0n4: ios=1536/1999, merge=0/0, ticks=394/422, in_queue=816, util=89.79% 00:10:08.324 13:52:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:08.324 [global] 00:10:08.324 thread=1 00:10:08.324 invalidate=1 00:10:08.324 rw=randwrite 00:10:08.324 time_based=1 00:10:08.324 runtime=1 00:10:08.324 ioengine=libaio 00:10:08.324 direct=1 00:10:08.324 bs=4096 00:10:08.324 iodepth=1 00:10:08.324 norandommap=0 00:10:08.324 numjobs=1 00:10:08.324 00:10:08.324 verify_dump=1 00:10:08.324 verify_backlog=512 00:10:08.324 verify_state_save=0 00:10:08.324 do_verify=1 00:10:08.324 verify=crc32c-intel 00:10:08.324 [job0] 00:10:08.324 filename=/dev/nvme0n1 00:10:08.324 [job1] 00:10:08.324 filename=/dev/nvme0n2 00:10:08.324 [job2] 00:10:08.324 filename=/dev/nvme0n3 00:10:08.324 [job3] 00:10:08.324 filename=/dev/nvme0n4 00:10:08.324 Could not set queue depth (nvme0n1) 00:10:08.324 Could not set queue depth (nvme0n2) 00:10:08.324 Could not set queue depth (nvme0n3) 00:10:08.324 Could not set queue depth (nvme0n4) 00:10:08.324 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:08.324 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:08.324 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:08.324 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:08.324 fio-3.35 00:10:08.324 Starting 4 threads 00:10:09.702 00:10:09.702 job0: (groupid=0, jobs=1): err= 0: pid=67752: Thu Jul 25 13:52:58 2024 00:10:09.702 read: IOPS=1318, BW=5275KiB/s (5401kB/s)(5280KiB/1001msec) 00:10:09.702 slat (nsec): min=10851, max=56555, avg=19877.40, stdev=6973.29 00:10:09.702 clat (usec): min=168, max=774, avg=367.52, stdev=75.47 00:10:09.702 lat (usec): min=186, max=800, avg=387.40, stdev=78.22 00:10:09.702 clat percentiles (usec): 00:10:09.702 | 1.00th=[ 
212], 5.00th=[ 289], 10.00th=[ 306], 20.00th=[ 322], 00:10:09.702 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 367], 00:10:09.702 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 424], 95.00th=[ 502], 00:10:09.702 | 99.00th=[ 693], 99.50th=[ 725], 99.90th=[ 766], 99.95th=[ 775], 00:10:09.702 | 99.99th=[ 775] 00:10:09.702 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:09.702 slat (usec): min=13, max=191, avg=33.41, stdev=11.31 00:10:09.702 clat (usec): min=123, max=2259, avg=279.99, stdev=78.37 00:10:09.702 lat (usec): min=158, max=2288, avg=313.40, stdev=79.52 00:10:09.702 clat percentiles (usec): 00:10:09.702 | 1.00th=[ 143], 5.00th=[ 165], 10.00th=[ 188], 20.00th=[ 241], 00:10:09.702 | 30.00th=[ 255], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 289], 00:10:09.702 | 70.00th=[ 302], 80.00th=[ 322], 90.00th=[ 359], 95.00th=[ 383], 00:10:09.702 | 99.00th=[ 429], 99.50th=[ 445], 99.90th=[ 635], 99.95th=[ 2245], 00:10:09.702 | 99.99th=[ 2245] 00:10:09.702 bw ( KiB/s): min= 7440, max= 7440, per=24.04%, avg=7440.00, stdev= 0.00, samples=1 00:10:09.702 iops : min= 1860, max= 1860, avg=1860.00, stdev= 0.00, samples=1 00:10:09.702 lat (usec) : 250=14.67%, 500=82.84%, 750=2.35%, 1000=0.11% 00:10:09.702 lat (msec) : 4=0.04% 00:10:09.702 cpu : usr=1.90%, sys=6.10%, ctx=2857, majf=0, minf=11 00:10:09.702 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.702 issued rwts: total=1320,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.702 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.702 job1: (groupid=0, jobs=1): err= 0: pid=67753: Thu Jul 25 13:52:58 2024 00:10:09.702 read: IOPS=2151, BW=8607KiB/s (8814kB/s)(8616KiB/1001msec) 00:10:09.702 slat (nsec): min=13425, max=52574, avg=16978.11, stdev=4550.09 00:10:09.702 clat (usec): min=160, max=305, avg=221.06, stdev=19.71 00:10:09.702 lat (usec): min=175, max=325, avg=238.04, stdev=20.75 00:10:09.702 clat percentiles (usec): 00:10:09.702 | 1.00th=[ 178], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 204], 00:10:09.702 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:10:09.702 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 258], 00:10:09.702 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 297], 99.95th=[ 306], 00:10:09.702 | 99.99th=[ 306] 00:10:09.702 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:09.702 slat (nsec): min=18632, max=96943, avg=25701.26, stdev=6788.43 00:10:09.702 clat (usec): min=107, max=585, avg=161.06, stdev=22.09 00:10:09.702 lat (usec): min=128, max=617, avg=186.76, stdev=24.46 00:10:09.702 clat percentiles (usec): 00:10:09.702 | 1.00th=[ 122], 5.00th=[ 130], 10.00th=[ 137], 20.00th=[ 145], 00:10:09.702 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 165], 00:10:09.702 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 188], 95.00th=[ 196], 00:10:09.702 | 99.00th=[ 219], 99.50th=[ 225], 99.90th=[ 258], 99.95th=[ 260], 00:10:09.702 | 99.99th=[ 586] 00:10:09.702 bw ( KiB/s): min=10496, max=10496, per=33.91%, avg=10496.00, stdev= 0.00, samples=1 00:10:09.702 iops : min= 2624, max= 2624, avg=2624.00, stdev= 0.00, samples=1 00:10:09.702 lat (usec) : 250=96.27%, 500=3.71%, 750=0.02% 00:10:09.702 cpu : usr=2.20%, sys=7.70%, ctx=4714, majf=0, minf=14 00:10:09.702 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:10:09.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.702 issued rwts: total=2154,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.702 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.702 job2: (groupid=0, jobs=1): err= 0: pid=67754: Thu Jul 25 13:52:58 2024 00:10:09.702 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:09.702 slat (nsec): min=13050, max=62822, avg=16232.75, stdev=3703.04 00:10:09.702 clat (usec): min=198, max=702, avg=242.77, stdev=27.37 00:10:09.702 lat (usec): min=212, max=716, avg=259.01, stdev=27.75 00:10:09.702 clat percentiles (usec): 00:10:09.702 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 225], 00:10:09.702 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 245], 00:10:09.702 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 277], 00:10:09.702 | 99.00th=[ 310], 99.50th=[ 338], 99.90th=[ 660], 99.95th=[ 676], 00:10:09.702 | 99.99th=[ 701] 00:10:09.702 write: IOPS=2111, BW=8448KiB/s (8650kB/s)(8456KiB/1001msec); 0 zone resets 00:10:09.702 slat (usec): min=15, max=145, avg=23.98, stdev= 5.63 00:10:09.702 clat (usec): min=128, max=2468, avg=194.53, stdev=71.34 00:10:09.702 lat (usec): min=152, max=2508, avg=218.51, stdev=72.01 00:10:09.702 clat percentiles (usec): 00:10:09.702 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 174], 00:10:09.702 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:10:09.702 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 217], 95.00th=[ 227], 00:10:09.702 | 99.00th=[ 306], 99.50th=[ 578], 99.90th=[ 881], 99.95th=[ 1401], 00:10:09.702 | 99.99th=[ 2474] 00:10:09.702 bw ( KiB/s): min= 8552, max= 8552, per=27.63%, avg=8552.00, stdev= 0.00, samples=1 00:10:09.702 iops : min= 2138, max= 2138, avg=2138.00, stdev= 0.00, samples=1 00:10:09.702 lat (usec) : 250=83.64%, 500=15.91%, 750=0.34%, 1000=0.07% 00:10:09.702 lat (msec) : 2=0.02%, 4=0.02% 00:10:09.702 cpu : usr=1.60%, sys=6.70%, ctx=4170, majf=0, minf=5 00:10:09.702 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.702 issued rwts: total=2048,2114,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.702 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.702 job3: (groupid=0, jobs=1): err= 0: pid=67755: Thu Jul 25 13:52:58 2024 00:10:09.702 read: IOPS=1207, BW=4831KiB/s (4947kB/s)(4836KiB/1001msec) 00:10:09.702 slat (usec): min=15, max=124, avg=22.52, stdev= 5.68 00:10:09.702 clat (usec): min=268, max=679, avg=359.64, stdev=46.85 00:10:09.702 lat (usec): min=284, max=702, avg=382.16, stdev=48.16 00:10:09.702 clat percentiles (usec): 00:10:09.702 | 1.00th=[ 281], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 326], 00:10:09.702 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 363], 00:10:09.702 | 70.00th=[ 375], 80.00th=[ 383], 90.00th=[ 408], 95.00th=[ 453], 00:10:09.702 | 99.00th=[ 515], 99.50th=[ 545], 99.90th=[ 644], 99.95th=[ 676], 00:10:09.702 | 99.99th=[ 676] 00:10:09.702 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:09.702 slat (usec): min=18, max=194, avg=37.55, stdev=10.47 00:10:09.702 clat (usec): min=131, max=7759, avg=307.38, stdev=216.67 00:10:09.702 lat (usec): min=161, max=7788, avg=344.93, stdev=218.10 
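The fio-wrapper invocation above (-p nvmf -i 4096 -d 1 -t randwrite -r 1 -v) expands to the job file dumped at the start of the run. Written out by hand it would look roughly like the sketch below; the file name is arbitrary, the wrapper normally generates it internally, and the /dev/nvme0n1..nvme0n4 names depend on how the four namespaces enumerated on this host.

cat > nvmf-verify.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio nvmf-verify.fio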
00:10:09.703 clat percentiles (usec): 00:10:09.703 | 1.00th=[ 159], 5.00th=[ 186], 10.00th=[ 225], 20.00th=[ 255], 00:10:09.703 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 302], 00:10:09.703 | 70.00th=[ 314], 80.00th=[ 334], 90.00th=[ 408], 95.00th=[ 449], 00:10:09.703 | 99.00th=[ 529], 99.50th=[ 586], 99.90th=[ 2376], 99.95th=[ 7767], 00:10:09.703 | 99.99th=[ 7767] 00:10:09.703 bw ( KiB/s): min= 6264, max= 6264, per=20.24%, avg=6264.00, stdev= 0.00, samples=1 00:10:09.703 iops : min= 1566, max= 1566, avg=1566.00, stdev= 0.00, samples=1 00:10:09.703 lat (usec) : 250=9.98%, 500=88.27%, 750=1.57% 00:10:09.703 lat (msec) : 2=0.07%, 4=0.07%, 10=0.04% 00:10:09.703 cpu : usr=1.80%, sys=7.10%, ctx=2749, majf=0, minf=17 00:10:09.703 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.703 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.703 issued rwts: total=1209,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.703 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.703 00:10:09.703 Run status group 0 (all jobs): 00:10:09.703 READ: bw=26.3MiB/s (27.5MB/s), 4831KiB/s-8607KiB/s (4947kB/s-8814kB/s), io=26.3MiB (27.6MB), run=1001-1001msec 00:10:09.703 WRITE: bw=30.2MiB/s (31.7MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=30.3MiB (31.7MB), run=1001-1001msec 00:10:09.703 00:10:09.703 Disk stats (read/write): 00:10:09.703 nvme0n1: ios=1074/1458, merge=0/0, ticks=409/408, in_queue=817, util=89.77% 00:10:09.703 nvme0n2: ios=2084/2048, merge=0/0, ticks=470/342, in_queue=812, util=89.19% 00:10:09.703 nvme0n3: ios=1637/2048, merge=0/0, ticks=439/416, in_queue=855, util=90.37% 00:10:09.703 nvme0n4: ios=1030/1336, merge=0/0, ticks=380/430, in_queue=810, util=89.18% 00:10:09.703 13:52:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:09.703 [global] 00:10:09.703 thread=1 00:10:09.703 invalidate=1 00:10:09.703 rw=write 00:10:09.703 time_based=1 00:10:09.703 runtime=1 00:10:09.703 ioengine=libaio 00:10:09.703 direct=1 00:10:09.703 bs=4096 00:10:09.703 iodepth=128 00:10:09.703 norandommap=0 00:10:09.703 numjobs=1 00:10:09.703 00:10:09.703 verify_dump=1 00:10:09.703 verify_backlog=512 00:10:09.703 verify_state_save=0 00:10:09.703 do_verify=1 00:10:09.703 verify=crc32c-intel 00:10:09.703 [job0] 00:10:09.703 filename=/dev/nvme0n1 00:10:09.703 [job1] 00:10:09.703 filename=/dev/nvme0n2 00:10:09.703 [job2] 00:10:09.703 filename=/dev/nvme0n3 00:10:09.703 [job3] 00:10:09.703 filename=/dev/nvme0n4 00:10:09.703 Could not set queue depth (nvme0n1) 00:10:09.703 Could not set queue depth (nvme0n2) 00:10:09.703 Could not set queue depth (nvme0n3) 00:10:09.703 Could not set queue depth (nvme0n4) 00:10:09.703 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:09.703 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:09.703 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:09.703 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:09.703 fio-3.35 00:10:09.703 Starting 4 threads 00:10:11.080 00:10:11.080 job0: (groupid=0, jobs=1): err= 0: pid=67809: Thu Jul 25 13:52:59 2024 00:10:11.080 read: 
IOPS=4156, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1001msec) 00:10:11.080 slat (usec): min=6, max=6798, avg=108.75, stdev=530.03 00:10:11.080 clat (usec): min=363, max=26229, avg=14556.50, stdev=3170.17 00:10:11.080 lat (usec): min=2500, max=26245, avg=14665.25, stdev=3141.80 00:10:11.080 clat percentiles (usec): 00:10:11.080 | 1.00th=[ 5538], 5.00th=[11338], 10.00th=[11469], 20.00th=[11731], 00:10:11.080 | 30.00th=[11994], 40.00th=[13698], 50.00th=[15270], 60.00th=[15795], 00:10:11.080 | 70.00th=[16188], 80.00th=[16712], 90.00th=[17171], 95.00th=[17957], 00:10:11.080 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26346], 99.95th=[26346], 00:10:11.080 | 99.99th=[26346] 00:10:11.080 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:10:11.080 slat (usec): min=9, max=10304, avg=110.48, stdev=505.15 00:10:11.080 clat (usec): min=7502, max=19153, avg=14261.31, stdev=2461.62 00:10:11.080 lat (usec): min=7535, max=22889, avg=14371.79, stdev=2433.96 00:10:11.080 clat percentiles (usec): 00:10:11.080 | 1.00th=[ 9241], 5.00th=[10945], 10.00th=[11076], 20.00th=[11338], 00:10:11.080 | 30.00th=[11731], 40.00th=[13960], 50.00th=[15139], 60.00th=[15533], 00:10:11.080 | 70.00th=[15926], 80.00th=[16450], 90.00th=[16909], 95.00th=[17957], 00:10:11.080 | 99.00th=[18482], 99.50th=[19268], 99.90th=[19268], 99.95th=[19268], 00:10:11.080 | 99.99th=[19268] 00:10:11.080 bw ( KiB/s): min=16416, max=16416, per=30.45%, avg=16416.00, stdev= 0.00, samples=1 00:10:11.080 iops : min= 4104, max= 4104, avg=4104.00, stdev= 0.00, samples=1 00:10:11.080 lat (usec) : 500=0.01% 00:10:11.080 lat (msec) : 4=0.36%, 10=1.70%, 20=96.50%, 50=1.43% 00:10:11.080 cpu : usr=5.40%, sys=12.30%, ctx=276, majf=0, minf=4 00:10:11.080 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:11.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.080 issued rwts: total=4161,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.080 job1: (groupid=0, jobs=1): err= 0: pid=67810: Thu Jul 25 13:52:59 2024 00:10:11.080 read: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec) 00:10:11.080 slat (usec): min=5, max=9867, avg=221.67, stdev=1169.50 00:10:11.080 clat (usec): min=18141, max=36599, avg=28824.31, stdev=4756.07 00:10:11.080 lat (usec): min=23450, max=36609, avg=29045.97, stdev=4646.47 00:10:11.080 clat percentiles (usec): 00:10:11.080 | 1.00th=[19268], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511], 00:10:11.080 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25822], 60.00th=[32637], 00:10:11.080 | 70.00th=[32900], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:10:11.080 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:10:11.080 | 99.99th=[36439] 00:10:11.080 write: IOPS=2396, BW=9585KiB/s (9815kB/s)(9604KiB/1002msec); 0 zone resets 00:10:11.080 slat (usec): min=11, max=8635, avg=220.67, stdev=1123.45 00:10:11.080 clat (usec): min=136, max=34875, avg=27784.98, stdev=5921.85 00:10:11.080 lat (usec): min=2154, max=34902, avg=28005.65, stdev=5847.00 00:10:11.080 clat percentiles (usec): 00:10:11.080 | 1.00th=[ 2704], 5.00th=[19268], 10.00th=[22938], 20.00th=[24249], 00:10:11.080 | 30.00th=[24511], 40.00th=[24773], 50.00th=[30540], 60.00th=[31589], 00:10:11.080 | 70.00th=[32113], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:10:11.080 | 99.00th=[34866], 99.50th=[34866], 99.90th=[34866], 
99.95th=[34866], 00:10:11.080 | 99.99th=[34866] 00:10:11.080 bw ( KiB/s): min= 8192, max= 8192, per=15.19%, avg=8192.00, stdev= 0.00, samples=1 00:10:11.080 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:11.080 lat (usec) : 250=0.02% 00:10:11.080 lat (msec) : 4=0.72%, 10=0.72%, 20=2.76%, 50=95.77% 00:10:11.080 cpu : usr=2.00%, sys=6.09%, ctx=140, majf=0, minf=11 00:10:11.080 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:10:11.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.080 issued rwts: total=2048,2401,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.080 job2: (groupid=0, jobs=1): err= 0: pid=67811: Thu Jul 25 13:52:59 2024 00:10:11.080 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:10:11.080 slat (usec): min=5, max=11104, avg=221.72, stdev=1170.26 00:10:11.080 clat (usec): min=18163, max=36621, avg=28749.56, stdev=4755.79 00:10:11.080 lat (usec): min=23681, max=36631, avg=28971.28, stdev=4647.28 00:10:11.080 clat percentiles (usec): 00:10:11.080 | 1.00th=[19268], 5.00th=[23725], 10.00th=[23987], 20.00th=[24511], 00:10:11.080 | 30.00th=[24773], 40.00th=[24773], 50.00th=[25297], 60.00th=[32375], 00:10:11.080 | 70.00th=[33162], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:10:11.080 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:10:11.080 | 99.99th=[36439] 00:10:11.080 write: IOPS=2359, BW=9438KiB/s (9665kB/s)(9476KiB/1004msec); 0 zone resets 00:10:11.080 slat (usec): min=11, max=8521, avg=222.82, stdev=1121.84 00:10:11.080 clat (usec): min=2648, max=34969, avg=28269.85, stdev=5100.77 00:10:11.080 lat (usec): min=7868, max=35009, avg=28492.67, stdev=5002.05 00:10:11.080 clat percentiles (usec): 00:10:11.080 | 1.00th=[ 8455], 5.00th=[20317], 10.00th=[23725], 20.00th=[24249], 00:10:11.080 | 30.00th=[24511], 40.00th=[25035], 50.00th=[30802], 60.00th=[31851], 00:10:11.080 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:10:11.080 | 99.00th=[34866], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:10:11.080 | 99.99th=[34866] 00:10:11.080 bw ( KiB/s): min= 8192, max= 9724, per=16.61%, avg=8958.00, stdev=1083.29, samples=2 00:10:11.080 iops : min= 2048, max= 2431, avg=2239.50, stdev=270.82, samples=2 00:10:11.080 lat (msec) : 4=0.02%, 10=0.72%, 20=2.35%, 50=96.90% 00:10:11.080 cpu : usr=1.99%, sys=7.58%, ctx=139, majf=0, minf=11 00:10:11.080 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:10:11.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.080 issued rwts: total=2048,2369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.080 job3: (groupid=0, jobs=1): err= 0: pid=67812: Thu Jul 25 13:52:59 2024 00:10:11.080 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:10:11.080 slat (usec): min=6, max=4587, avg=113.38, stdev=547.49 00:10:11.080 clat (usec): min=8329, max=19824, avg=15157.06, stdev=2332.39 00:10:11.080 lat (usec): min=8341, max=19839, avg=15270.44, stdev=2284.27 00:10:11.080 clat percentiles (usec): 00:10:11.080 | 1.00th=[10290], 5.00th=[12649], 10.00th=[12911], 20.00th=[13173], 00:10:11.080 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 
60.00th=[15795], 00:10:11.080 | 70.00th=[16712], 80.00th=[17695], 90.00th=[18482], 95.00th=[19006], 00:10:11.080 | 99.00th=[19792], 99.50th=[19792], 99.90th=[19792], 99.95th=[19792], 00:10:11.080 | 99.99th=[19792] 00:10:11.080 write: IOPS=4146, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1002msec); 0 zone resets 00:10:11.080 slat (usec): min=9, max=4430, avg=119.76, stdev=540.29 00:10:11.080 clat (usec): min=1650, max=20093, avg=15452.50, stdev=2873.74 00:10:11.080 lat (usec): min=1689, max=20119, avg=15572.26, stdev=2842.23 00:10:11.080 clat percentiles (usec): 00:10:11.080 | 1.00th=[ 5473], 5.00th=[12125], 10.00th=[12518], 20.00th=[12911], 00:10:11.080 | 30.00th=[13173], 40.00th=[13698], 50.00th=[16319], 60.00th=[16712], 00:10:11.080 | 70.00th=[17433], 80.00th=[18220], 90.00th=[18744], 95.00th=[19268], 00:10:11.080 | 99.00th=[19792], 99.50th=[20055], 99.90th=[20055], 99.95th=[20055], 00:10:11.080 | 99.99th=[20055] 00:10:11.080 bw ( KiB/s): min=15911, max=16854, per=30.38%, avg=16382.50, stdev=666.80, samples=2 00:10:11.080 iops : min= 3977, max= 4213, avg=4095.00, stdev=166.88, samples=2 00:10:11.080 lat (msec) : 2=0.13%, 4=0.19%, 10=0.86%, 20=98.59%, 50=0.22% 00:10:11.080 cpu : usr=3.40%, sys=13.79%, ctx=259, majf=0, minf=15 00:10:11.080 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:11.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.080 issued rwts: total=4096,4155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.080 00:10:11.080 Run status group 0 (all jobs): 00:10:11.080 READ: bw=48.1MiB/s (50.4MB/s), 8159KiB/s-16.2MiB/s (8355kB/s-17.0MB/s), io=48.3MiB (50.6MB), run=1001-1004msec 00:10:11.080 WRITE: bw=52.7MiB/s (55.2MB/s), 9438KiB/s-18.0MiB/s (9665kB/s-18.9MB/s), io=52.9MiB (55.4MB), run=1001-1004msec 00:10:11.080 00:10:11.080 Disk stats (read/write): 00:10:11.080 nvme0n1: ios=3570/3584, merge=0/0, ticks=11841/11788, in_queue=23629, util=86.66% 00:10:11.080 nvme0n2: ios=1624/2048, merge=0/0, ticks=10366/12289, in_queue=22655, util=86.93% 00:10:11.080 nvme0n3: ios=1600/2048, merge=0/0, ticks=11356/14115, in_queue=25471, util=88.75% 00:10:11.080 nvme0n4: ios=3200/3584, merge=0/0, ticks=11263/12613, in_queue=23876, util=89.61% 00:10:11.080 13:52:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:11.080 [global] 00:10:11.080 thread=1 00:10:11.080 invalidate=1 00:10:11.080 rw=randwrite 00:10:11.080 time_based=1 00:10:11.080 runtime=1 00:10:11.081 ioengine=libaio 00:10:11.081 direct=1 00:10:11.081 bs=4096 00:10:11.081 iodepth=128 00:10:11.081 norandommap=0 00:10:11.081 numjobs=1 00:10:11.081 00:10:11.081 verify_dump=1 00:10:11.081 verify_backlog=512 00:10:11.081 verify_state_save=0 00:10:11.081 do_verify=1 00:10:11.081 verify=crc32c-intel 00:10:11.081 [job0] 00:10:11.081 filename=/dev/nvme0n1 00:10:11.081 [job1] 00:10:11.081 filename=/dev/nvme0n2 00:10:11.081 [job2] 00:10:11.081 filename=/dev/nvme0n3 00:10:11.081 [job3] 00:10:11.081 filename=/dev/nvme0n4 00:10:11.081 Could not set queue depth (nvme0n1) 00:10:11.081 Could not set queue depth (nvme0n2) 00:10:11.081 Could not set queue depth (nvme0n3) 00:10:11.081 Could not set queue depth (nvme0n4) 00:10:11.081 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:10:11.081 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:11.081 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:11.081 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:11.081 fio-3.35 00:10:11.081 Starting 4 threads 00:10:12.536 00:10:12.536 job0: (groupid=0, jobs=1): err= 0: pid=67865: Thu Jul 25 13:53:01 2024 00:10:12.536 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:10:12.536 slat (usec): min=8, max=8399, avg=110.74, stdev=716.48 00:10:12.536 clat (usec): min=7935, max=25391, avg=15442.38, stdev=2020.93 00:10:12.536 lat (usec): min=7963, max=30491, avg=15553.13, stdev=2051.45 00:10:12.536 clat percentiles (usec): 00:10:12.536 | 1.00th=[ 9241], 5.00th=[13435], 10.00th=[13698], 20.00th=[14353], 00:10:12.536 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15270], 60.00th=[15664], 00:10:12.536 | 70.00th=[16057], 80.00th=[16581], 90.00th=[17433], 95.00th=[18220], 00:10:12.536 | 99.00th=[24511], 99.50th=[24773], 99.90th=[25297], 99.95th=[25297], 00:10:12.536 | 99.99th=[25297] 00:10:12.536 write: IOPS=4288, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1004msec); 0 zone resets 00:10:12.536 slat (usec): min=12, max=12470, avg=118.98, stdev=745.10 00:10:12.536 clat (usec): min=3017, max=23775, avg=14821.82, stdev=2342.19 00:10:12.536 lat (usec): min=3035, max=23797, avg=14940.80, stdev=2256.34 00:10:12.536 clat percentiles (usec): 00:10:12.536 | 1.00th=[ 8717], 5.00th=[12256], 10.00th=[12649], 20.00th=[13304], 00:10:12.536 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14877], 00:10:12.536 | 70.00th=[15664], 80.00th=[16188], 90.00th=[17433], 95.00th=[19006], 00:10:12.536 | 99.00th=[23462], 99.50th=[23462], 99.90th=[23725], 99.95th=[23725], 00:10:12.536 | 99.99th=[23725] 00:10:12.536 bw ( KiB/s): min=16472, max=16992, per=34.91%, avg=16732.00, stdev=367.70, samples=2 00:10:12.536 iops : min= 4118, max= 4248, avg=4183.00, stdev=91.92, samples=2 00:10:12.536 lat (msec) : 4=0.24%, 10=2.08%, 20=95.74%, 50=1.94% 00:10:12.536 cpu : usr=4.19%, sys=11.27%, ctx=182, majf=0, minf=13 00:10:12.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:12.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.536 issued rwts: total=4096,4306,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.536 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.536 job1: (groupid=0, jobs=1): err= 0: pid=67866: Thu Jul 25 13:53:01 2024 00:10:12.536 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:10:12.536 slat (usec): min=9, max=8843, avg=132.40, stdev=867.91 00:10:12.536 clat (usec): min=10260, max=29278, avg=18164.92, stdev=2101.33 00:10:12.536 lat (usec): min=10289, max=35635, avg=18297.32, stdev=2135.33 00:10:12.536 clat percentiles (usec): 00:10:12.536 | 1.00th=[11207], 5.00th=[15664], 10.00th=[16712], 20.00th=[17171], 00:10:12.536 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 60.00th=[18482], 00:10:12.536 | 70.00th=[18744], 80.00th=[19268], 90.00th=[19792], 95.00th=[20317], 00:10:12.536 | 99.00th=[28181], 99.50th=[28705], 99.90th=[29230], 99.95th=[29230], 00:10:12.536 | 99.99th=[29230] 00:10:12.536 write: IOPS=3637, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1007msec); 0 zone resets 00:10:12.536 slat (usec): min=5, max=14545, 
avg=137.24, stdev=885.48 00:10:12.536 clat (usec): min=1040, max=26130, avg=17060.37, stdev=2389.62 00:10:12.536 lat (usec): min=7267, max=26155, avg=17197.61, stdev=2259.48 00:10:12.536 clat percentiles (usec): 00:10:12.536 | 1.00th=[ 8225], 5.00th=[14353], 10.00th=[15270], 20.00th=[15926], 00:10:12.536 | 30.00th=[16319], 40.00th=[16581], 50.00th=[17171], 60.00th=[17433], 00:10:12.536 | 70.00th=[17957], 80.00th=[18482], 90.00th=[19006], 95.00th=[19530], 00:10:12.536 | 99.00th=[25560], 99.50th=[25822], 99.90th=[26084], 99.95th=[26084], 00:10:12.536 | 99.99th=[26084] 00:10:12.536 bw ( KiB/s): min=12808, max=15864, per=29.91%, avg=14336.00, stdev=2160.92, samples=2 00:10:12.536 iops : min= 3202, max= 3966, avg=3584.00, stdev=540.23, samples=2 00:10:12.536 lat (msec) : 2=0.01%, 10=1.13%, 20=94.65%, 50=4.21% 00:10:12.536 cpu : usr=3.08%, sys=10.04%, ctx=157, majf=0, minf=11 00:10:12.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:12.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.536 issued rwts: total=3584,3663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.536 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.536 job2: (groupid=0, jobs=1): err= 0: pid=67867: Thu Jul 25 13:53:01 2024 00:10:12.536 read: IOPS=1853, BW=7415KiB/s (7593kB/s)(7452KiB/1005msec) 00:10:12.536 slat (usec): min=11, max=19727, avg=235.43, stdev=1570.55 00:10:12.536 clat (usec): min=3526, max=62538, avg=32737.09, stdev=5481.70 00:10:12.536 lat (usec): min=14780, max=69654, avg=32972.52, stdev=5409.54 00:10:12.536 clat percentiles (usec): 00:10:12.536 | 1.00th=[15139], 5.00th=[21103], 10.00th=[24773], 20.00th=[32113], 00:10:12.536 | 30.00th=[32375], 40.00th=[33162], 50.00th=[33424], 60.00th=[33817], 00:10:12.536 | 70.00th=[34866], 80.00th=[34866], 90.00th=[35914], 95.00th=[35914], 00:10:12.537 | 99.00th=[54789], 99.50th=[57934], 99.90th=[62653], 99.95th=[62653], 00:10:12.537 | 99.99th=[62653] 00:10:12.537 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:10:12.537 slat (usec): min=9, max=36237, avg=265.15, stdev=1876.76 00:10:12.537 clat (usec): min=14948, max=54918, avg=32327.11, stdev=5054.27 00:10:12.537 lat (usec): min=24569, max=54964, avg=32592.27, stdev=4794.63 00:10:12.537 clat percentiles (usec): 00:10:12.537 | 1.00th=[18744], 5.00th=[27132], 10.00th=[28443], 20.00th=[29754], 00:10:12.537 | 30.00th=[30802], 40.00th=[31327], 50.00th=[31851], 60.00th=[31851], 00:10:12.537 | 70.00th=[32637], 80.00th=[33424], 90.00th=[34866], 95.00th=[39584], 00:10:12.537 | 99.00th=[54264], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:10:12.537 | 99.99th=[54789] 00:10:12.537 bw ( KiB/s): min= 8192, max= 8208, per=17.11%, avg=8200.00, stdev=11.31, samples=2 00:10:12.537 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:10:12.537 lat (msec) : 4=0.03%, 20=2.74%, 50=95.01%, 100=2.22% 00:10:12.537 cpu : usr=1.79%, sys=6.77%, ctx=83, majf=0, minf=17 00:10:12.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:10:12.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.537 issued rwts: total=1863,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.537 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.537 job3: (groupid=0, jobs=1): err= 0: pid=67868: Thu Jul 
25 13:53:01 2024 00:10:12.537 read: IOPS=1901, BW=7607KiB/s (7789kB/s)(7660KiB/1007msec) 00:10:12.537 slat (usec): min=9, max=25173, avg=258.77, stdev=1980.61 00:10:12.537 clat (usec): min=3406, max=55125, avg=33664.05, stdev=4834.27 00:10:12.537 lat (usec): min=16458, max=65702, avg=33922.82, stdev=5079.33 00:10:12.537 clat percentiles (usec): 00:10:12.537 | 1.00th=[16909], 5.00th=[24511], 10.00th=[27919], 20.00th=[32375], 00:10:12.537 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:10:12.537 | 70.00th=[34866], 80.00th=[35914], 90.00th=[39584], 95.00th=[41681], 00:10:12.537 | 99.00th=[43254], 99.50th=[51643], 99.90th=[54789], 99.95th=[55313], 00:10:12.537 | 99.99th=[55313] 00:10:12.537 write: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec); 0 zone resets 00:10:12.537 slat (usec): min=10, max=21551, avg=238.69, stdev=1691.08 00:10:12.537 clat (usec): min=14036, max=42499, avg=30794.36, stdev=4699.27 00:10:12.537 lat (usec): min=14059, max=42551, avg=31033.04, stdev=4449.66 00:10:12.537 clat percentiles (usec): 00:10:12.537 | 1.00th=[15795], 5.00th=[18482], 10.00th=[24249], 20.00th=[28181], 00:10:12.537 | 30.00th=[30540], 40.00th=[31327], 50.00th=[31589], 60.00th=[31851], 00:10:12.537 | 70.00th=[32637], 80.00th=[33817], 90.00th=[34866], 95.00th=[38536], 00:10:12.537 | 99.00th=[39584], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 00:10:12.537 | 99.99th=[42730] 00:10:12.537 bw ( KiB/s): min= 8192, max= 8192, per=17.09%, avg=8192.00, stdev= 0.00, samples=2 00:10:12.537 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:12.537 lat (msec) : 4=0.03%, 20=4.11%, 50=95.61%, 100=0.25% 00:10:12.537 cpu : usr=1.99%, sys=6.36%, ctx=108, majf=0, minf=9 00:10:12.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:10:12.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.537 issued rwts: total=1915,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.537 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.537 00:10:12.537 Run status group 0 (all jobs): 00:10:12.537 READ: bw=44.4MiB/s (46.6MB/s), 7415KiB/s-15.9MiB/s (7593kB/s-16.7MB/s), io=44.8MiB (46.9MB), run=1004-1007msec 00:10:12.537 WRITE: bw=46.8MiB/s (49.1MB/s), 8135KiB/s-16.8MiB/s (8330kB/s-17.6MB/s), io=47.1MiB (49.4MB), run=1004-1007msec 00:10:12.537 00:10:12.537 Disk stats (read/write): 00:10:12.537 nvme0n1: ios=3500/3584, merge=0/0, ticks=50589/49205, in_queue=99794, util=86.57% 00:10:12.537 nvme0n2: ios=2943/3072, merge=0/0, ticks=51144/49717, in_queue=100861, util=86.34% 00:10:12.537 nvme0n3: ios=1536/1664, merge=0/0, ticks=49824/52012, in_queue=101836, util=88.77% 00:10:12.537 nvme0n4: ios=1536/1728, merge=0/0, ticks=51492/50681, in_queue=102173, util=89.53% 00:10:12.537 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:12.537 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=67891 00:10:12.537 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:12.537 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:12.537 [global] 00:10:12.537 thread=1 00:10:12.537 invalidate=1 00:10:12.537 rw=read 00:10:12.537 time_based=1 00:10:12.537 runtime=10 00:10:12.537 ioengine=libaio 00:10:12.537 direct=1 00:10:12.537 bs=4096 
00:10:12.537 iodepth=1 00:10:12.537 norandommap=1 00:10:12.537 numjobs=1 00:10:12.537 00:10:12.537 [job0] 00:10:12.537 filename=/dev/nvme0n1 00:10:12.537 [job1] 00:10:12.537 filename=/dev/nvme0n2 00:10:12.537 [job2] 00:10:12.537 filename=/dev/nvme0n3 00:10:12.537 [job3] 00:10:12.537 filename=/dev/nvme0n4 00:10:12.537 Could not set queue depth (nvme0n1) 00:10:12.537 Could not set queue depth (nvme0n2) 00:10:12.537 Could not set queue depth (nvme0n3) 00:10:12.537 Could not set queue depth (nvme0n4) 00:10:12.537 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.537 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.537 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.537 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.537 fio-3.35 00:10:12.537 Starting 4 threads 00:10:15.820 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:15.820 fio: pid=67935, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:15.820 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=42242048, buflen=4096 00:10:15.820 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:15.820 fio: pid=67934, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:15.820 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=62361600, buflen=4096 00:10:15.820 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:15.820 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:16.078 fio: pid=67932, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:16.078 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=3457024, buflen=4096 00:10:16.336 13:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:16.336 13:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:16.336 fio: pid=67933, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:16.336 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=55095296, buflen=4096 00:10:16.595 00:10:16.595 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=67932: Thu Jul 25 13:53:05 2024 00:10:16.595 read: IOPS=4853, BW=19.0MiB/s (19.9MB/s)(67.3MiB/3550msec) 00:10:16.595 slat (usec): min=11, max=14282, avg=17.19, stdev=161.78 00:10:16.595 clat (usec): min=130, max=4075, avg=187.27, stdev=63.23 00:10:16.595 lat (usec): min=143, max=14508, avg=204.46, stdev=174.84 00:10:16.595 clat percentiles (usec): 00:10:16.595 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:10:16.595 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 176], 60.00th=[ 188], 00:10:16.595 | 70.00th=[ 202], 80.00th=[ 219], 90.00th=[ 241], 95.00th=[ 258], 00:10:16.595 | 99.00th=[ 289], 99.50th=[ 318], 99.90th=[ 725], 99.95th=[ 
1139], 00:10:16.595 | 99.99th=[ 2606] 00:10:16.595 bw ( KiB/s): min=16600, max=23192, per=34.33%, avg=20177.33, stdev=2913.05, samples=6 00:10:16.595 iops : min= 4150, max= 5798, avg=5044.33, stdev=728.26, samples=6 00:10:16.595 lat (usec) : 250=93.11%, 500=6.63%, 750=0.15%, 1000=0.04% 00:10:16.595 lat (msec) : 2=0.03%, 4=0.02%, 10=0.01% 00:10:16.595 cpu : usr=1.78%, sys=6.26%, ctx=17249, majf=0, minf=1 00:10:16.595 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.595 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.595 issued rwts: total=17229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.595 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.595 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=67933: Thu Jul 25 13:53:05 2024 00:10:16.595 read: IOPS=3515, BW=13.7MiB/s (14.4MB/s)(52.5MiB/3826msec) 00:10:16.595 slat (usec): min=8, max=15696, avg=20.58, stdev=257.22 00:10:16.595 clat (usec): min=3, max=4814, avg=262.31, stdev=80.04 00:10:16.595 lat (usec): min=166, max=16013, avg=282.89, stdev=269.91 00:10:16.595 clat percentiles (usec): 00:10:16.595 | 1.00th=[ 174], 5.00th=[ 192], 10.00th=[ 206], 20.00th=[ 221], 00:10:16.595 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:10:16.595 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 388], 95.00th=[ 416], 00:10:16.595 | 99.00th=[ 457], 99.50th=[ 474], 99.90th=[ 611], 99.95th=[ 988], 00:10:16.595 | 99.99th=[ 2409] 00:10:16.595 bw ( KiB/s): min= 9384, max=16256, per=23.52%, avg=13823.86, stdev=2159.13, samples=7 00:10:16.595 iops : min= 2346, max= 4064, avg=3455.86, stdev=539.80, samples=7 00:10:16.595 lat (usec) : 4=0.01%, 100=0.01%, 250=54.56%, 500=45.12%, 750=0.25% 00:10:16.595 lat (usec) : 1000=0.01% 00:10:16.595 lat (msec) : 2=0.03%, 4=0.01%, 10=0.01% 00:10:16.595 cpu : usr=1.25%, sys=4.92%, ctx=13463, majf=0, minf=1 00:10:16.595 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.595 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.595 issued rwts: total=13452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.595 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.595 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=67934: Thu Jul 25 13:53:05 2024 00:10:16.595 read: IOPS=4652, BW=18.2MiB/s (19.1MB/s)(59.5MiB/3273msec) 00:10:16.595 slat (usec): min=11, max=11739, avg=16.54, stdev=118.78 00:10:16.595 clat (usec): min=134, max=9576, avg=196.89, stdev=89.99 00:10:16.595 lat (usec): min=151, max=11978, avg=213.43, stdev=149.63 00:10:16.595 clat percentiles (usec): 00:10:16.595 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:10:16.595 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 188], 60.00th=[ 196], 00:10:16.595 | 70.00th=[ 208], 80.00th=[ 225], 90.00th=[ 243], 95.00th=[ 258], 00:10:16.595 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 562], 99.95th=[ 783], 00:10:16.595 | 99.99th=[ 3359] 00:10:16.595 bw ( KiB/s): min=16392, max=21008, per=32.30%, avg=18985.33, stdev=2070.79, samples=6 00:10:16.595 iops : min= 4098, max= 5252, avg=4746.33, stdev=517.70, samples=6 00:10:16.595 lat (usec) : 250=92.98%, 500=6.90%, 750=0.06%, 1000=0.02% 00:10:16.595 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 
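The err=121 results in this pass are expected: while a 10 second read job runs against the four namespaces, target/fio.sh deletes the backing bdevs underneath it, and the harness counts the resulting fio failure as success. A condensed sketch of that flow, using the same calls traced in this log (the actual script wraps the wait in a few extra status checks):

sync
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_raid_delete concat0    # /dev/nvme0n4 starts returning Remote I/O error
$rpc bdev_raid_delete raid0      # /dev/nvme0n3 follows
for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $rpc bdev_malloc_delete "$malloc_bdev"
done
wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'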
00:10:16.595 cpu : usr=1.56%, sys=6.36%, ctx=15231, majf=0, minf=1 00:10:16.595 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.595 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.595 issued rwts: total=15226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.595 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.595 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=67935: Thu Jul 25 13:53:05 2024 00:10:16.595 read: IOPS=3434, BW=13.4MiB/s (14.1MB/s)(40.3MiB/3003msec) 00:10:16.595 slat (usec): min=8, max=120, avg=16.12, stdev= 5.95 00:10:16.595 clat (usec): min=161, max=7540, avg=273.16, stdev=125.79 00:10:16.595 lat (usec): min=177, max=7554, avg=289.28, stdev=127.43 00:10:16.595 clat percentiles (usec): 00:10:16.595 | 1.00th=[ 180], 5.00th=[ 198], 10.00th=[ 212], 20.00th=[ 233], 00:10:16.595 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 260], 00:10:16.595 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 392], 95.00th=[ 412], 00:10:16.595 | 99.00th=[ 445], 99.50th=[ 465], 99.90th=[ 685], 99.95th=[ 2573], 00:10:16.595 | 99.99th=[ 5932] 00:10:16.595 bw ( KiB/s): min= 9384, max=16296, per=23.08%, avg=13563.20, stdev=2621.01, samples=5 00:10:16.595 iops : min= 2346, max= 4074, avg=3390.80, stdev=655.25, samples=5 00:10:16.595 lat (usec) : 250=43.91%, 500=55.81%, 750=0.18%, 1000=0.02% 00:10:16.595 lat (msec) : 2=0.01%, 4=0.04%, 10=0.02% 00:10:16.595 cpu : usr=1.17%, sys=5.10%, ctx=10315, majf=0, minf=1 00:10:16.595 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.595 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.595 issued rwts: total=10314,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.595 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.595 00:10:16.595 Run status group 0 (all jobs): 00:10:16.595 READ: bw=57.4MiB/s (60.2MB/s), 13.4MiB/s-19.0MiB/s (14.1MB/s-19.9MB/s), io=220MiB (230MB), run=3003-3826msec 00:10:16.595 00:10:16.595 Disk stats (read/write): 00:10:16.595 nvme0n1: ios=16447/0, merge=0/0, ticks=3089/0, in_queue=3089, util=95.22% 00:10:16.595 nvme0n2: ios=12470/0, merge=0/0, ticks=3274/0, in_queue=3274, util=94.99% 00:10:16.595 nvme0n3: ios=14593/0, merge=0/0, ticks=2888/0, in_queue=2888, util=96.12% 00:10:16.595 nvme0n4: ios=9824/0, merge=0/0, ticks=2634/0, in_queue=2634, util=96.52% 00:10:16.595 13:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:16.595 13:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:16.853 13:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:16.853 13:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:17.111 13:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.111 13:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:17.368 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.368 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:17.626 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.626 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:17.883 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:17.883 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 67891 00:10:17.883 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:17.883 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:17.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.883 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:17.883 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:17.883 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:17.883 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.883 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:17.883 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.883 nvmf hotplug test: fio failed as expected 00:10:17.883 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:17.883 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:17.883 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:17.883 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.142 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:18.142 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:18.142 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:18.142 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:18.142 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:18.142 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:18.142 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:18.142 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:18.142 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@120 -- # set +e 00:10:18.142 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:18.142 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:18.142 rmmod nvme_tcp 00:10:18.142 rmmod nvme_fabrics 00:10:18.400 rmmod nvme_keyring 00:10:18.400 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:18.400 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:18.400 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:18.400 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 67506 ']' 00:10:18.400 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 67506 00:10:18.400 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 67506 ']' 00:10:18.400 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 67506 00:10:18.400 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:18.400 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:18.400 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67506 00:10:18.400 killing process with pid 67506 00:10:18.400 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:18.400 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:18.400 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67506' 00:10:18.400 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 67506 00:10:18.400 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 67506 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:18.660 ************************************ 00:10:18.660 END TEST nvmf_fio_target 00:10:18.660 ************************************ 00:10:18.660 00:10:18.660 real 0m19.843s 00:10:18.660 user 1m16.133s 00:10:18.660 sys 0m9.281s 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:18.660 ************************************ 00:10:18.660 START TEST nvmf_bdevio 00:10:18.660 ************************************ 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:18.660 * Looking for test storage... 00:10:18.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.660 
13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:18.660 
13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:18.660 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:18.919 Cannot find device "nvmf_tgt_br" 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:18.919 Cannot find device "nvmf_tgt_br2" 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:18.919 Cannot find device "nvmf_tgt_br" 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:18.919 Cannot find device "nvmf_tgt_br2" 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:18.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:18.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:18.919 13:53:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:18.919 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:19.177 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:19.177 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:19.177 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:19.177 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:19.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:19.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:10:19.177 00:10:19.177 --- 10.0.0.2 ping statistics --- 00:10:19.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.177 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:10:19.177 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:19.177 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:19.177 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:10:19.177 00:10:19.177 --- 10.0.0.3 ping statistics --- 00:10:19.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.177 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:19.177 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:19.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:19.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:10:19.178 00:10:19.178 --- 10.0.0.1 ping statistics --- 00:10:19.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.178 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:10:19.178 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.178 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:10:19.178 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:19.178 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.178 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:19.178 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:19.178 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.178 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:19.178 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:19.178 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:19.178 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:19.178 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:19.178 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:19.178 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=68201 00:10:19.178 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:19.178 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 68201 00:10:19.178 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 68201 ']' 00:10:19.178 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.178 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:19.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.178 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.178 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:19.178 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:19.178 [2024-07-25 13:53:08.061848] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
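For reference, the nvmf_veth_init sequence traced above reduces to the following standalone sketch. It is condensed from the commands visible in this log (the namespace, interface, bridge, and address names are the ones printed there); the actual helper in test/nvmf/common.sh may differ in ordering and error handling.

    # Target side lives in its own network namespace; the initiator stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-facing veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target interface
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Addressing as shown in the pings: 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = target listeners.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # A bridge in the root namespace stitches the three veth peers together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # Allow NVMe/TCP (port 4420) in and forwarding across the bridge, then verify reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1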
00:10:19.178 [2024-07-25 13:53:08.061941] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.178 [2024-07-25 13:53:08.200997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:19.435 [2024-07-25 13:53:08.332176] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.435 [2024-07-25 13:53:08.332241] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.435 [2024-07-25 13:53:08.332256] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.435 [2024-07-25 13:53:08.332266] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.435 [2024-07-25 13:53:08.332275] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:19.435 [2024-07-25 13:53:08.332501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:19.435 [2024-07-25 13:53:08.332606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:19.435 [2024-07-25 13:53:08.333596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:19.435 [2024-07-25 13:53:08.333605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:19.435 [2024-07-25 13:53:08.390565] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:20.368 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:20.368 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:20.368 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:20.368 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:20.368 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.368 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.368 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:20.368 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.368 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.368 [2024-07-25 13:53:09.147041] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.368 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.368 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:20.368 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.368 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.368 Malloc0 00:10:20.368 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.368 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.369 [2024-07-25 13:53:09.214202] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:20.369 { 00:10:20.369 "params": { 00:10:20.369 "name": "Nvme$subsystem", 00:10:20.369 "trtype": "$TEST_TRANSPORT", 00:10:20.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:20.369 "adrfam": "ipv4", 00:10:20.369 "trsvcid": "$NVMF_PORT", 00:10:20.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:20.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:20.369 "hdgst": ${hdgst:-false}, 00:10:20.369 "ddgst": ${ddgst:-false} 00:10:20.369 }, 00:10:20.369 "method": "bdev_nvme_attach_controller" 00:10:20.369 } 00:10:20.369 EOF 00:10:20.369 )") 00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:20.369 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:20.369 "params": { 00:10:20.369 "name": "Nvme1", 00:10:20.369 "trtype": "tcp", 00:10:20.369 "traddr": "10.0.0.2", 00:10:20.369 "adrfam": "ipv4", 00:10:20.369 "trsvcid": "4420", 00:10:20.369 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:20.369 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:20.369 "hdgst": false, 00:10:20.369 "ddgst": false 00:10:20.369 }, 00:10:20.369 "method": "bdev_nvme_attach_controller" 00:10:20.369 }' 00:10:20.369 [2024-07-25 13:53:09.267663] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:10:20.369 [2024-07-25 13:53:09.267755] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68237 ] 00:10:20.626 [2024-07-25 13:53:09.408957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:20.626 [2024-07-25 13:53:09.547041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.626 [2024-07-25 13:53:09.547192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.626 [2024-07-25 13:53:09.547198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.626 [2024-07-25 13:53:09.613631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:20.885 I/O targets: 00:10:20.885 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:20.885 00:10:20.885 00:10:20.885 CUnit - A unit testing framework for C - Version 2.1-3 00:10:20.885 http://cunit.sourceforge.net/ 00:10:20.885 00:10:20.885 00:10:20.885 Suite: bdevio tests on: Nvme1n1 00:10:20.885 Test: blockdev write read block ...passed 00:10:20.885 Test: blockdev write zeroes read block ...passed 00:10:20.885 Test: blockdev write zeroes read no split ...passed 00:10:20.885 Test: blockdev write zeroes read split ...passed 00:10:20.885 Test: blockdev write zeroes read split partial ...passed 00:10:20.885 Test: blockdev reset ...[2024-07-25 13:53:09.772205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:20.885 [2024-07-25 13:53:09.772354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6797c0 (9): Bad file descriptor 00:10:20.885 [2024-07-25 13:53:09.784717] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:20.885 passed 00:10:20.885 Test: blockdev write read 8 blocks ...passed 00:10:20.885 Test: blockdev write read size > 128k ...passed 00:10:20.885 Test: blockdev write read invalid size ...passed 00:10:20.885 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:20.885 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:20.885 Test: blockdev write read max offset ...passed 00:10:20.885 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:20.885 Test: blockdev writev readv 8 blocks ...passed 00:10:20.885 Test: blockdev writev readv 30 x 1block ...passed 00:10:20.885 Test: blockdev writev readv block ...passed 00:10:20.885 Test: blockdev writev readv size > 128k ...passed 00:10:20.885 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:20.885 Test: blockdev comparev and writev ...[2024-07-25 13:53:09.794118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.885 [2024-07-25 13:53:09.794174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:20.885 [2024-07-25 13:53:09.794196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.885 [2024-07-25 13:53:09.794208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:20.885 [2024-07-25 13:53:09.794851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.885 [2024-07-25 13:53:09.794893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:20.885 [2024-07-25 13:53:09.794912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.885 [2024-07-25 13:53:09.794924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:20.885 [2024-07-25 13:53:09.795346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.885 [2024-07-25 13:53:09.795380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:20.885 [2024-07-25 13:53:09.795398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.885 [2024-07-25 13:53:09.795409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:20.885 [2024-07-25 13:53:09.795793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.885 [2024-07-25 13:53:09.795825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:20.885 [2024-07-25 13:53:09.795843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.885 [2024-07-25 13:53:09.795853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:20.885 passed 00:10:20.885 Test: blockdev nvme passthru rw ...passed 00:10:20.885 Test: blockdev nvme passthru vendor specific ...[2024-07-25 13:53:09.797123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:20.885 [2024-07-25 13:53:09.797158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:20.885 [2024-07-25 13:53:09.797451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:20.885 [2024-07-25 13:53:09.797485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:20.885 [2024-07-25 13:53:09.797672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:20.885 [2024-07-25 13:53:09.797787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:20.885 [2024-07-25 13:53:09.798070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:20.885 [2024-07-25 13:53:09.798102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:20.885 passed 00:10:20.885 Test: blockdev nvme admin passthru ...passed 00:10:20.885 Test: blockdev copy ...passed 00:10:20.885 00:10:20.886 Run Summary: Type Total Ran Passed Failed Inactive 00:10:20.886 suites 1 1 n/a 0 0 00:10:20.886 tests 23 23 23 0 0 00:10:20.886 asserts 152 152 152 0 n/a 00:10:20.886 00:10:20.886 Elapsed time = 0.153 seconds 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:21.144 rmmod nvme_tcp 00:10:21.144 rmmod nvme_fabrics 00:10:21.144 rmmod nvme_keyring 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
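For context, the bdevio pass that just completed boils down to a short target-side RPC sequence plus one initiator-side JSON config. The sketch below is condensed from the traced commands above (block sizes, NQNs, and addresses are the ones shown in the log); rpc.py stands in for scripts/rpc.py or the rpc_cmd wrapper used by the test scripts, and paths are abbreviated.

    # Target side: TCP transport, a 64 MiB / 512 B-block malloc bdev, and a subsystem listening on 10.0.0.2:4420.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: bdevio consumes the generated JSON shown earlier (bdev_nvme_attach_controller
    # with trtype tcp, traddr 10.0.0.2, trsvcid 4420, subnqn ...cnode1) and runs the CUnit
    # blockdev suite against the resulting Nvme1n1 bdev.
    test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)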
00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 68201 ']' 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 68201 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 68201 ']' 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 68201 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:21.144 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68201 00:10:21.403 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:21.403 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:21.403 killing process with pid 68201 00:10:21.403 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68201' 00:10:21.403 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 68201 00:10:21.403 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 68201 00:10:21.662 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:21.662 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:21.662 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:21.662 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:21.662 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:21.662 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.662 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.662 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.662 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:21.662 00:10:21.662 real 0m2.933s 00:10:21.662 user 0m9.877s 00:10:21.662 sys 0m0.749s 00:10:21.662 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.662 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.662 ************************************ 00:10:21.662 END TEST nvmf_bdevio 00:10:21.662 ************************************ 00:10:21.662 13:53:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:21.662 00:10:21.662 real 2m36.302s 00:10:21.662 user 7m0.716s 00:10:21.662 sys 0m51.798s 00:10:21.662 13:53:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.662 ************************************ 00:10:21.662 END TEST nvmf_target_core 00:10:21.662 ************************************ 00:10:21.662 13:53:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:21.662 13:53:10 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:21.662 13:53:10 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:21.662 13:53:10 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.662 13:53:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:21.662 ************************************ 00:10:21.662 START TEST nvmf_target_extra 00:10:21.662 ************************************ 00:10:21.662 13:53:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:21.662 * Looking for test storage... 00:10:21.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:21.662 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:21.662 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:21.662 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.662 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:21.663 13:53:10 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:21.663 ************************************ 00:10:21.663 START TEST nvmf_auth_target 00:10:21.663 ************************************ 00:10:21.663 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:21.922 * Looking for test storage... 00:10:21.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.922 13:53:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.922 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.923 13:53:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:21.923 Cannot find device "nvmf_tgt_br" 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:21.923 Cannot find device "nvmf_tgt_br2" 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:21.923 Cannot find device "nvmf_tgt_br" 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:21.923 Cannot find device "nvmf_tgt_br2" 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:21.923 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:21.923 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:21.923 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:22.182 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:22.182 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:22.182 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:22.182 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:22.182 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:22.182 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:22.182 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:22.182 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:22.182 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:22.182 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:22.182 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:22.182 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:22.182 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:22.182 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:22.182 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:22.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:22.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:10:22.183 00:10:22.183 --- 10.0.0.2 ping statistics --- 00:10:22.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.183 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:22.183 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:22.183 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:10:22.183 00:10:22.183 --- 10.0.0.3 ping statistics --- 00:10:22.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.183 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:22.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:22.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:10:22.183 00:10:22.183 --- 10.0.0.1 ping statistics --- 00:10:22.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.183 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=68456 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 68456 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68456 ']' 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
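The entries above build the virtual topology that the rest of this run authenticates over: a host-side veth pair (nvmf_init_if / nvmf_init_br), two target-side veth pairs (nvmf_tgt_if / nvmf_tgt_br and nvmf_tgt_if2 / nvmf_tgt_br2) moved into the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge, with 10.0.0.1 on the initiator side and 10.0.0.2 / 10.0.0.3 on the target side. The three pings confirm the wiring before nvmf_tgt is launched inside the namespace with -L nvmf_auth. A minimal way to inspect equivalent state by hand, assuming the same interface and namespace names as in the trace above:

    # namespace the target application runs in
    ip netns list
    # bridge members on the host side (expect nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2)
    ip -br link show master nvmf_br
    # addresses inside the target namespace (expect 10.0.0.2/24 and 10.0.0.3/24)
    ip netns exec nvmf_tgt_ns_spdk ip -br addr show
    # connectivity check from the initiator side, as the script itself does
    ping -c 1 -W 1 10.0.0.2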
00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:22.183 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=68492 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bde7a609bd43011b8e58b3f8ac1883a0ee6ed7a8e89a4987 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.TJ0 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bde7a609bd43011b8e58b3f8ac1883a0ee6ed7a8e89a4987 0 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bde7a609bd43011b8e58b3f8ac1883a0ee6ed7a8e89a4987 0 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bde7a609bd43011b8e58b3f8ac1883a0ee6ed7a8e89a4987 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:23.560 13:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.TJ0 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.TJ0 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.TJ0 00:10:23.560 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5c361e85b6b9accd9dce5d4dc55a9609d0470158fa044b7285e55021ec8e2e5e 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Zhy 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5c361e85b6b9accd9dce5d4dc55a9609d0470158fa044b7285e55021ec8e2e5e 3 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5c361e85b6b9accd9dce5d4dc55a9609d0470158fa044b7285e55021ec8e2e5e 3 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5c361e85b6b9accd9dce5d4dc55a9609d0470158fa044b7285e55021ec8e2e5e 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Zhy 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Zhy 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Zhy 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:23.561 13:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=30ce650d412bc08eb6352c1273741a7c 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.cxP 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 30ce650d412bc08eb6352c1273741a7c 1 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 30ce650d412bc08eb6352c1273741a7c 1 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=30ce650d412bc08eb6352c1273741a7c 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.cxP 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.cxP 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.cxP 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=05bef22f8c4f7cefe29e4dd21545ca2de4e0a4bce5661e3d 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.e71 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 05bef22f8c4f7cefe29e4dd21545ca2de4e0a4bce5661e3d 2 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 05bef22f8c4f7cefe29e4dd21545ca2de4e0a4bce5661e3d 2 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=05bef22f8c4f7cefe29e4dd21545ca2de4e0a4bce5661e3d 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.e71 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.e71 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.e71 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1f6f6c5ef274e67a8b0e65a09e0d8a530d82368121f574d0 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.1DL 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1f6f6c5ef274e67a8b0e65a09e0d8a530d82368121f574d0 2 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1f6f6c5ef274e67a8b0e65a09e0d8a530d82368121f574d0 2 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1f6f6c5ef274e67a8b0e65a09e0d8a530d82368121f574d0 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:23.561 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:23.820 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.1DL 00:10:23.820 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.1DL 00:10:23.820 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.1DL 00:10:23.820 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:23.821 13:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f1b99b4cb0bd2e208abab47ce5de54a2 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.npU 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f1b99b4cb0bd2e208abab47ce5de54a2 1 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f1b99b4cb0bd2e208abab47ce5de54a2 1 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f1b99b4cb0bd2e208abab47ce5de54a2 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.npU 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.npU 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.npU 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a73e6462577fb47a7636cecf097121e3e3e019efa4e31ff6fbd773fea337856d 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.JAE 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
a73e6462577fb47a7636cecf097121e3e3e019efa4e31ff6fbd773fea337856d 3 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a73e6462577fb47a7636cecf097121e3e3e019efa4e31ff6fbd773fea337856d 3 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a73e6462577fb47a7636cecf097121e3e3e019efa4e31ff6fbd773fea337856d 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.JAE 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.JAE 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.JAE 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 68456 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68456 ']' 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:23.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:23.821 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.079 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:24.079 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:24.079 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 68492 /var/tmp/host.sock 00:10:24.079 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68492 ']' 00:10:24.079 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:10:24.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:24.080 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:24.080 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
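Between 13:53:12 and 13:53:13 the script generates one key file per digest/length combination: the raw material is a hex string read with xxd from /dev/urandom, the file comes from mktemp -t spdk.key-<digest>.XXX, and the digest id follows the table set up above (null=0, sha256=1, sha384=2, sha512=3), which is why the nvme connect commands later in the trace pass secrets such as DHHC-1:00:... and DHHC-1:03:.... The base64 payload of those secrets decodes to the hex string generated here plus four trailing bytes; my reading is that the trailing bytes are a little-endian CRC-32 of the secret, but the python one-liner that does the formatting is not echoed in the trace, so the sketch below is a reconstruction under that assumption rather than the script's own code:

    # rough reconstruction of gen_dhchap_key / format_dhchap_key for "null 48";
    # the CRC-32 detail is an assumption, not taken from the trace
    key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters
    python3 - "$key" 00 <<'EOF'
    import base64, sys, zlib
    secret, digest_id = sys.argv[1].encode(), sys.argv[2]
    crc = zlib.crc32(secret).to_bytes(4, "little")   # assumed: CRC-32 of the secret, little-endian
    print(f"DHHC-1:{digest_id}:{base64.b64encode(secret + crc).decode()}:")
    EOF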
00:10:24.080 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:24.080 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.338 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:24.338 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:24.338 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:24.338 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.338 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.338 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.338 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:24.338 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.TJ0 00:10:24.595 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.595 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.595 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.595 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.TJ0 00:10:24.595 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.TJ0 00:10:24.595 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Zhy ]] 00:10:24.595 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Zhy 00:10:24.595 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.595 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.852 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.852 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Zhy 00:10:24.852 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Zhy 00:10:25.111 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:25.111 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.cxP 00:10:25.111 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.111 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.111 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.111 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.cxP 00:10:25.111 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.cxP 00:10:25.370 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.e71 ]] 00:10:25.370 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.e71 00:10:25.370 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.370 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.370 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.370 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.e71 00:10:25.370 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.e71 00:10:25.627 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:25.627 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.1DL 00:10:25.627 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.627 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.627 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.627 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.1DL 00:10:25.627 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.1DL 00:10:25.885 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.npU ]] 00:10:25.885 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.npU 00:10:25.885 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.885 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.885 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.885 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.npU 00:10:25.885 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.npU 00:10:26.450 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:26.450 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.JAE 00:10:26.450 13:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.450 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.450 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.450 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.JAE 00:10:26.450 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.JAE 00:10:26.450 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:26.450 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:26.450 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:26.450 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:26.450 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:26.450 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:26.707 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:26.707 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:26.708 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:26.708 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:26.708 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:26.708 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:26.708 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.708 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.708 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.708 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.708 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.708 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:10:27.274 00:10:27.274 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:27.274 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:27.274 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.532 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:27.532 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:27.532 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.532 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.532 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.532 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:27.532 { 00:10:27.532 "cntlid": 1, 00:10:27.532 "qid": 0, 00:10:27.532 "state": "enabled", 00:10:27.532 "thread": "nvmf_tgt_poll_group_000", 00:10:27.532 "listen_address": { 00:10:27.532 "trtype": "TCP", 00:10:27.532 "adrfam": "IPv4", 00:10:27.532 "traddr": "10.0.0.2", 00:10:27.532 "trsvcid": "4420" 00:10:27.532 }, 00:10:27.532 "peer_address": { 00:10:27.532 "trtype": "TCP", 00:10:27.532 "adrfam": "IPv4", 00:10:27.532 "traddr": "10.0.0.1", 00:10:27.532 "trsvcid": "55158" 00:10:27.532 }, 00:10:27.532 "auth": { 00:10:27.532 "state": "completed", 00:10:27.532 "digest": "sha256", 00:10:27.532 "dhgroup": "null" 00:10:27.532 } 00:10:27.532 } 00:10:27.532 ]' 00:10:27.532 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:27.532 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:27.532 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:27.532 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:27.532 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:27.532 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.532 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:27.532 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:27.789 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:00:YmRlN2E2MDliZDQzMDExYjhlNThiM2Y4YWMxODgzYTBlZTZlZDdhOGU4OWE0OTg3IOMeBA==: --dhchap-ctrl-secret DHHC-1:03:NWMzNjFlODViNmI5YWNjZDlkY2U1ZDRkYzU1YTk2MDlkMDQ3MDE1OGZhMDQ0YjcyODVlNTUwMjFlYzhlMmU1ZcIFnmw=: 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.060 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.060 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:33.060 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
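The qpair listing that follows is what connect_authenticate inspects: it pulls the auth object for the new qpair and checks digest, dhgroup, and state one jq call at a time. The wrapper resolves to something like the following direct call, assuming the target app is still listening on the default /var/tmp/spdk.sock as the waitforlisten above suggests; a single jq expression over the same RPC output yields all three fields at once:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | "\(.digest) \(.dhgroup) \(.state)"'   # expect: sha256 null completed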
00:10:33.319 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:33.319 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:33.319 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.319 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.319 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.319 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:33.319 { 00:10:33.319 "cntlid": 3, 00:10:33.319 "qid": 0, 00:10:33.319 "state": "enabled", 00:10:33.319 "thread": "nvmf_tgt_poll_group_000", 00:10:33.319 "listen_address": { 00:10:33.319 "trtype": "TCP", 00:10:33.319 "adrfam": "IPv4", 00:10:33.319 "traddr": "10.0.0.2", 00:10:33.319 "trsvcid": "4420" 00:10:33.319 }, 00:10:33.319 "peer_address": { 00:10:33.319 "trtype": "TCP", 00:10:33.319 "adrfam": "IPv4", 00:10:33.319 "traddr": "10.0.0.1", 00:10:33.319 "trsvcid": "50104" 00:10:33.319 }, 00:10:33.319 "auth": { 00:10:33.319 "state": "completed", 00:10:33.319 "digest": "sha256", 00:10:33.319 "dhgroup": "null" 00:10:33.319 } 00:10:33.319 } 00:10:33.319 ]' 00:10:33.319 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:33.319 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:33.319 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:33.577 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:33.577 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:33.577 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:33.577 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:33.577 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.836 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:01:MzBjZTY1MGQ0MTJiYzA4ZWI2MzUyYzEyNzM3NDFhN2MOF0u8: --dhchap-ctrl-secret DHHC-1:02:MDViZWYyMmY4YzRmN2NlZmUyOWU0ZGQyMTU0NWNhMmRlNGUwYTRiY2U1NjYxZTNk/peKUw==: 00:10:34.770 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:34.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:34.770 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:10:34.770 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.770 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
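Each pass of the keyid loop that just completed has the same shape. Condensed into direct rpc.py calls (the trace drives the target side through its rpc_cmd wrapper and the SPDK host side through hostrpc), one iteration looks roughly like this, where key1/ckey1 are the keyring names registered earlier from the /tmp/spdk.key-* files on both applications:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sub=nqn.2024-03.io.spdk:cnode0
    host=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89

    # host app: restrict negotiation to the digest/dhgroup under test
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    # target app: tell the subsystem which key pair this host must present
    $rpc nvmf_subsystem_add_host "$sub" "$host" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # host app: attach, which forces the DH-HMAC-CHAP exchange, verify the qpair, detach
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$host" -n "$sub" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # the kernel-initiator leg (nvme connect with the matching DHHC-1 secrets, then
    # nvme disconnect) runs next, exactly as shown verbatim in the trace above
    $rpc nvmf_subsystem_remove_host "$sub" "$host"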
00:10:34.770 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.770 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:34.771 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:34.771 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:34.771 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:10:34.771 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:34.771 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:34.771 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:34.771 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:34.771 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:34.771 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:34.771 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.771 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.771 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.771 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:34.771 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.337 00:10:35.337 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:35.337 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:35.337 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:35.595 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:35.596 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:35.596 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.596 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:35.596 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.596 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:35.596 { 00:10:35.596 "cntlid": 5, 00:10:35.596 "qid": 0, 00:10:35.596 "state": "enabled", 00:10:35.596 "thread": "nvmf_tgt_poll_group_000", 00:10:35.596 "listen_address": { 00:10:35.596 "trtype": "TCP", 00:10:35.596 "adrfam": "IPv4", 00:10:35.596 "traddr": "10.0.0.2", 00:10:35.596 "trsvcid": "4420" 00:10:35.596 }, 00:10:35.596 "peer_address": { 00:10:35.596 "trtype": "TCP", 00:10:35.596 "adrfam": "IPv4", 00:10:35.596 "traddr": "10.0.0.1", 00:10:35.596 "trsvcid": "50122" 00:10:35.596 }, 00:10:35.596 "auth": { 00:10:35.596 "state": "completed", 00:10:35.596 "digest": "sha256", 00:10:35.596 "dhgroup": "null" 00:10:35.596 } 00:10:35.596 } 00:10:35.596 ]' 00:10:35.596 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:35.596 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:35.596 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:35.596 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:35.596 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:35.596 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.596 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.596 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.854 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:02:MWY2ZjZjNWVmMjc0ZTY3YThiMGU2NWEwOWUwZDhhNTMwZDgyMzY4MTIxZjU3NGQwjIQi8A==: --dhchap-ctrl-secret DHHC-1:01:ZjFiOTliNGNiMGJkMmUyMDhhYmFiNDdjZTVkZTU0YTIx/32G: 00:10:36.794 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.794 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:10:36.794 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.794 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.794 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.794 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:36.794 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:36.794 13:53:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:37.052 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:10:37.052 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:37.052 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:37.052 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:37.052 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:37.052 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:37.052 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key3 00:10:37.052 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.052 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.052 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.052 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:37.052 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:37.311 00:10:37.311 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:37.311 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.311 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:37.569 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.569 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.569 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.569 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.569 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.569 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:37.569 { 00:10:37.569 "cntlid": 7, 00:10:37.569 "qid": 0, 00:10:37.569 "state": "enabled", 00:10:37.569 "thread": "nvmf_tgt_poll_group_000", 00:10:37.569 "listen_address": { 00:10:37.569 "trtype": "TCP", 00:10:37.569 "adrfam": "IPv4", 00:10:37.569 "traddr": 
"10.0.0.2", 00:10:37.569 "trsvcid": "4420" 00:10:37.569 }, 00:10:37.569 "peer_address": { 00:10:37.569 "trtype": "TCP", 00:10:37.569 "adrfam": "IPv4", 00:10:37.569 "traddr": "10.0.0.1", 00:10:37.569 "trsvcid": "50142" 00:10:37.569 }, 00:10:37.569 "auth": { 00:10:37.569 "state": "completed", 00:10:37.569 "digest": "sha256", 00:10:37.569 "dhgroup": "null" 00:10:37.569 } 00:10:37.569 } 00:10:37.569 ]' 00:10:37.569 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:37.569 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:37.569 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:37.569 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:37.569 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:37.844 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.844 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.844 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.126 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:03:YTczZTY0NjI1NzdmYjQ3YTc2MzZjZWNmMDk3MTIxZTNlM2UwMTllZmE0ZTMxZmY2ZmJkNzczZmVhMzM3ODU2ZAuDCpA=: 00:10:38.693 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.693 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:10:38.693 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.693 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.693 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.693 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:38.693 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:38.693 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:38.693 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:38.951 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:10:38.951 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:38.951 13:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:38.951 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:38.951 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:38.951 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.951 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.951 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.951 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.951 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.952 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.952 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.519 00:10:39.519 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:39.519 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:39.519 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.778 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.778 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.778 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.778 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.778 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.778 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:39.778 { 00:10:39.778 "cntlid": 9, 00:10:39.778 "qid": 0, 00:10:39.778 "state": "enabled", 00:10:39.778 "thread": "nvmf_tgt_poll_group_000", 00:10:39.778 "listen_address": { 00:10:39.778 "trtype": "TCP", 00:10:39.778 "adrfam": "IPv4", 00:10:39.778 "traddr": "10.0.0.2", 00:10:39.778 "trsvcid": "4420" 00:10:39.778 }, 00:10:39.778 "peer_address": { 00:10:39.778 "trtype": "TCP", 00:10:39.778 "adrfam": "IPv4", 00:10:39.778 "traddr": "10.0.0.1", 00:10:39.778 "trsvcid": "50174" 00:10:39.778 }, 00:10:39.778 "auth": { 00:10:39.778 "state": "completed", 00:10:39.778 "digest": "sha256", 00:10:39.778 "dhgroup": "ffdhe2048" 00:10:39.778 } 00:10:39.778 } 
00:10:39.778 ]' 00:10:39.778 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:39.778 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:39.778 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:39.778 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:39.778 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:39.778 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.778 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.778 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.345 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:00:YmRlN2E2MDliZDQzMDExYjhlNThiM2Y4YWMxODgzYTBlZTZlZDdhOGU4OWE0OTg3IOMeBA==: --dhchap-ctrl-secret DHHC-1:03:NWMzNjFlODViNmI5YWNjZDlkY2U1ZDRkYzU1YTk2MDlkMDQ3MDE1OGZhMDQ0YjcyODVlNTUwMjFlYzhlMmU1ZcIFnmw=: 00:10:40.911 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.911 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:10:40.911 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.911 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.911 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.911 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:40.911 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:40.911 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:41.170 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:10:41.170 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:41.170 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:41.170 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:41.170 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:41.170 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.170 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.170 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.170 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.170 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.170 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.170 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.429 00:10:41.687 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:41.687 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:41.687 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.945 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.945 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.945 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.945 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.945 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.945 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:41.945 { 00:10:41.945 "cntlid": 11, 00:10:41.945 "qid": 0, 00:10:41.945 "state": "enabled", 00:10:41.945 "thread": "nvmf_tgt_poll_group_000", 00:10:41.945 "listen_address": { 00:10:41.945 "trtype": "TCP", 00:10:41.945 "adrfam": "IPv4", 00:10:41.945 "traddr": "10.0.0.2", 00:10:41.945 "trsvcid": "4420" 00:10:41.945 }, 00:10:41.945 "peer_address": { 00:10:41.945 "trtype": "TCP", 00:10:41.945 "adrfam": "IPv4", 00:10:41.945 "traddr": "10.0.0.1", 00:10:41.945 "trsvcid": "50204" 00:10:41.945 }, 00:10:41.945 "auth": { 00:10:41.945 "state": "completed", 00:10:41.945 "digest": "sha256", 00:10:41.945 "dhgroup": "ffdhe2048" 00:10:41.945 } 00:10:41.945 } 00:10:41.945 ]' 00:10:41.945 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:41.945 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:41.945 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:41.945 13:53:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:41.945 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:41.945 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.945 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.945 13:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.512 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:01:MzBjZTY1MGQ0MTJiYzA4ZWI2MzUyYzEyNzM3NDFhN2MOF0u8: --dhchap-ctrl-secret DHHC-1:02:MDViZWYyMmY4YzRmN2NlZmUyOWU0ZGQyMTU0NWNhMmRlNGUwYTRiY2U1NjYxZTNk/peKUw==: 00:10:43.077 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.077 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:10:43.077 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.077 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.077 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.077 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:43.077 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:43.077 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:43.335 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:10:43.335 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:43.335 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:43.335 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:43.335 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:43.335 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:43.335 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:43.335 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
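At this point the outer loop has moved on to the ffdhe2048 DH group, and the inner loop walks each key index, configuring both sides before attaching a controller with in-band authentication. A minimal sketch of one such iteration (sha256/ffdhe2048 with key2), using the same RPC calls that appear in these entries; key2 and ckey2 are DH-HMAC-CHAP key names registered earlier in the test script, and the target-side RPC is assumed to use the default socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89
  subnqn=nqn.2024-03.io.spdk:cnode0
  # Host side: restrict DH-HMAC-CHAP to the digest/dhgroup under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # Target side: allow this host, naming the key it must present and the controller key for bidirectional auth.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Host side: attach a controller over TCP, authenticating with the same key pair.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

The key3 iterations in this log differ only in that no controller key is passed (the ckeys entry is empty), so only the host is authenticated.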
00:10:43.335 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.335 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.335 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:43.336 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:43.903 00:10:43.903 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:43.903 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.903 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:44.161 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.161 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.161 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.161 13:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.161 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.161 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:44.161 { 00:10:44.161 "cntlid": 13, 00:10:44.161 "qid": 0, 00:10:44.161 "state": "enabled", 00:10:44.161 "thread": "nvmf_tgt_poll_group_000", 00:10:44.161 "listen_address": { 00:10:44.161 "trtype": "TCP", 00:10:44.161 "adrfam": "IPv4", 00:10:44.161 "traddr": "10.0.0.2", 00:10:44.161 "trsvcid": "4420" 00:10:44.161 }, 00:10:44.161 "peer_address": { 00:10:44.161 "trtype": "TCP", 00:10:44.161 "adrfam": "IPv4", 00:10:44.161 "traddr": "10.0.0.1", 00:10:44.161 "trsvcid": "39442" 00:10:44.161 }, 00:10:44.161 "auth": { 00:10:44.161 "state": "completed", 00:10:44.161 "digest": "sha256", 00:10:44.162 "dhgroup": "ffdhe2048" 00:10:44.162 } 00:10:44.162 } 00:10:44.162 ]' 00:10:44.162 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:44.162 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:44.162 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:44.162 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:44.162 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:44.162 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.162 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.162 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.420 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:02:MWY2ZjZjNWVmMjc0ZTY3YThiMGU2NWEwOWUwZDhhNTMwZDgyMzY4MTIxZjU3NGQwjIQi8A==: --dhchap-ctrl-secret DHHC-1:01:ZjFiOTliNGNiMGJkMmUyMDhhYmFiNDdjZTVkZTU0YTIx/32G: 00:10:45.379 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.379 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:10:45.379 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.379 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.379 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.379 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:45.379 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:45.379 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:45.379 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:10:45.379 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:45.379 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:45.379 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:45.379 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:45.379 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.379 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key3 00:10:45.379 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.379 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.379 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.379 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:45.379 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:45.947 00:10:45.947 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:45.947 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:45.947 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.205 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.205 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.205 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.205 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.205 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.205 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:46.205 { 00:10:46.205 "cntlid": 15, 00:10:46.205 "qid": 0, 00:10:46.205 "state": "enabled", 00:10:46.205 "thread": "nvmf_tgt_poll_group_000", 00:10:46.205 "listen_address": { 00:10:46.205 "trtype": "TCP", 00:10:46.205 "adrfam": "IPv4", 00:10:46.205 "traddr": "10.0.0.2", 00:10:46.205 "trsvcid": "4420" 00:10:46.205 }, 00:10:46.205 "peer_address": { 00:10:46.205 "trtype": "TCP", 00:10:46.205 "adrfam": "IPv4", 00:10:46.205 "traddr": "10.0.0.1", 00:10:46.205 "trsvcid": "39480" 00:10:46.205 }, 00:10:46.205 "auth": { 00:10:46.205 "state": "completed", 00:10:46.205 "digest": "sha256", 00:10:46.205 "dhgroup": "ffdhe2048" 00:10:46.205 } 00:10:46.205 } 00:10:46.205 ]' 00:10:46.205 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:46.205 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:46.205 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:46.206 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:46.206 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:46.206 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.206 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.206 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:46.464 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:03:YTczZTY0NjI1NzdmYjQ3YTc2MzZjZWNmMDk3MTIxZTNlM2UwMTllZmE0ZTMxZmY2ZmJkNzczZmVhMzM3ODU2ZAuDCpA=: 00:10:47.399 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.400 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:10:47.400 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.400 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.400 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.400 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:47.400 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:47.400 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:47.400 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:47.661 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:10:47.661 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:47.661 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:47.661 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:47.661 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:47.661 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.661 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:47.661 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.661 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.661 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.661 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:47.661 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:47.922 00:10:47.922 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:47.922 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.922 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:48.190 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.190 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.190 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.190 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.190 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.190 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:48.190 { 00:10:48.190 "cntlid": 17, 00:10:48.190 "qid": 0, 00:10:48.190 "state": "enabled", 00:10:48.190 "thread": "nvmf_tgt_poll_group_000", 00:10:48.190 "listen_address": { 00:10:48.190 "trtype": "TCP", 00:10:48.190 "adrfam": "IPv4", 00:10:48.190 "traddr": "10.0.0.2", 00:10:48.190 "trsvcid": "4420" 00:10:48.190 }, 00:10:48.190 "peer_address": { 00:10:48.190 "trtype": "TCP", 00:10:48.190 "adrfam": "IPv4", 00:10:48.190 "traddr": "10.0.0.1", 00:10:48.190 "trsvcid": "39502" 00:10:48.190 }, 00:10:48.190 "auth": { 00:10:48.190 "state": "completed", 00:10:48.190 "digest": "sha256", 00:10:48.190 "dhgroup": "ffdhe3072" 00:10:48.190 } 00:10:48.190 } 00:10:48.190 ]' 00:10:48.190 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:48.190 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:48.190 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:48.190 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:48.190 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:48.455 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.455 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.455 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.716 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:00:YmRlN2E2MDliZDQzMDExYjhlNThiM2Y4YWMxODgzYTBlZTZlZDdhOGU4OWE0OTg3IOMeBA==: --dhchap-ctrl-secret DHHC-1:03:NWMzNjFlODViNmI5YWNjZDlkY2U1ZDRkYzU1YTk2MDlkMDQ3MDE1OGZhMDQ0YjcyODVlNTUwMjFlYzhlMmU1ZcIFnmw=: 00:10:49.287 13:53:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:49.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:49.287 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:10:49.287 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.287 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.287 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.287 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:49.287 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:49.287 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:49.545 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:10:49.545 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:49.545 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:49.545 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:49.545 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:49.545 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.545 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:49.545 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.545 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.545 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.545 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:49.545 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.111 00:10:50.111 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:50.111 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
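Once the SPDK-side controller has been checked it is detached, and the same subsystem is exercised once more through the kernel initiator with the raw secrets handed to nvme-cli, before the host entry is removed ahead of the next combination. A sketch of that tail end of each iteration, assembled from the commands in these entries; host_secret and ctrl_secret stand in for the full DHHC-1 strings, which appear verbatim in the log above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89
  hostid=71427938-e211-49fa-b6ad-486cdab0bd89
  host_secret='DHHC-1:01:...'      # placeholder for the host key printed in the log
  ctrl_secret='DHHC-1:02:...'      # placeholder for the controller key printed in the log
  # Drop the SPDK bdev controller now that its auth state has been verified.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # Kernel initiator: connect with in-band DH-HMAC-CHAP, then disconnect again.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
  nvme disconnect -n "$subnqn"
  # Target side: forget the host so the next digest/dhgroup/key pairing starts clean.
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"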
00:10:50.111 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.369 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.369 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:50.369 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.369 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.369 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.369 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:50.369 { 00:10:50.369 "cntlid": 19, 00:10:50.369 "qid": 0, 00:10:50.369 "state": "enabled", 00:10:50.369 "thread": "nvmf_tgt_poll_group_000", 00:10:50.369 "listen_address": { 00:10:50.369 "trtype": "TCP", 00:10:50.369 "adrfam": "IPv4", 00:10:50.369 "traddr": "10.0.0.2", 00:10:50.369 "trsvcid": "4420" 00:10:50.369 }, 00:10:50.369 "peer_address": { 00:10:50.369 "trtype": "TCP", 00:10:50.369 "adrfam": "IPv4", 00:10:50.369 "traddr": "10.0.0.1", 00:10:50.369 "trsvcid": "39540" 00:10:50.369 }, 00:10:50.369 "auth": { 00:10:50.369 "state": "completed", 00:10:50.369 "digest": "sha256", 00:10:50.369 "dhgroup": "ffdhe3072" 00:10:50.369 } 00:10:50.369 } 00:10:50.369 ]' 00:10:50.369 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:50.369 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:50.369 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:50.369 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:50.369 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:50.369 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.369 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.369 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.627 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:01:MzBjZTY1MGQ0MTJiYzA4ZWI2MzUyYzEyNzM3NDFhN2MOF0u8: --dhchap-ctrl-secret DHHC-1:02:MDViZWYyMmY4YzRmN2NlZmUyOWU0ZGQyMTU0NWNhMmRlNGUwYTRiY2U1NjYxZTNk/peKUw==: 00:10:51.558 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:51.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:51.558 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:10:51.558 13:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.558 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.558 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.558 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:51.558 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:51.558 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:51.817 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:10:51.817 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:51.817 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:51.817 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:51.817 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:51.817 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.817 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:51.817 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.817 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.817 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.817 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:51.817 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.075 00:10:52.075 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:52.075 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:52.075 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.333 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.333 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:10:52.333 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.333 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.333 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.333 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:52.333 { 00:10:52.333 "cntlid": 21, 00:10:52.333 "qid": 0, 00:10:52.333 "state": "enabled", 00:10:52.333 "thread": "nvmf_tgt_poll_group_000", 00:10:52.333 "listen_address": { 00:10:52.333 "trtype": "TCP", 00:10:52.333 "adrfam": "IPv4", 00:10:52.333 "traddr": "10.0.0.2", 00:10:52.333 "trsvcid": "4420" 00:10:52.333 }, 00:10:52.333 "peer_address": { 00:10:52.333 "trtype": "TCP", 00:10:52.333 "adrfam": "IPv4", 00:10:52.333 "traddr": "10.0.0.1", 00:10:52.333 "trsvcid": "39552" 00:10:52.333 }, 00:10:52.333 "auth": { 00:10:52.333 "state": "completed", 00:10:52.333 "digest": "sha256", 00:10:52.333 "dhgroup": "ffdhe3072" 00:10:52.333 } 00:10:52.333 } 00:10:52.333 ]' 00:10:52.333 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:52.333 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:52.333 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:52.591 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:52.591 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:52.591 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.591 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.591 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.849 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:02:MWY2ZjZjNWVmMjc0ZTY3YThiMGU2NWEwOWUwZDhhNTMwZDgyMzY4MTIxZjU3NGQwjIQi8A==: --dhchap-ctrl-secret DHHC-1:01:ZjFiOTliNGNiMGJkMmUyMDhhYmFiNDdjZTVkZTU0YTIx/32G: 00:10:53.477 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.477 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:10:53.477 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.477 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.477 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.477 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:10:53.477 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:53.477 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:53.736 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:10:53.736 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:53.736 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:53.736 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:53.736 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:53.736 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.736 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key3 00:10:53.736 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.736 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.736 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.736 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:53.736 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:54.301 00:10:54.301 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:54.301 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.301 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:54.569 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.569 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.569 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.569 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.569 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.569 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:54.569 { 00:10:54.569 "cntlid": 
23, 00:10:54.569 "qid": 0, 00:10:54.569 "state": "enabled", 00:10:54.569 "thread": "nvmf_tgt_poll_group_000", 00:10:54.569 "listen_address": { 00:10:54.569 "trtype": "TCP", 00:10:54.569 "adrfam": "IPv4", 00:10:54.569 "traddr": "10.0.0.2", 00:10:54.569 "trsvcid": "4420" 00:10:54.569 }, 00:10:54.569 "peer_address": { 00:10:54.569 "trtype": "TCP", 00:10:54.569 "adrfam": "IPv4", 00:10:54.569 "traddr": "10.0.0.1", 00:10:54.569 "trsvcid": "40100" 00:10:54.569 }, 00:10:54.569 "auth": { 00:10:54.569 "state": "completed", 00:10:54.569 "digest": "sha256", 00:10:54.569 "dhgroup": "ffdhe3072" 00:10:54.569 } 00:10:54.569 } 00:10:54.569 ]' 00:10:54.569 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:54.569 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:54.569 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:54.569 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:54.569 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:54.569 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.569 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.569 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.137 13:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:03:YTczZTY0NjI1NzdmYjQ3YTc2MzZjZWNmMDk3MTIxZTNlM2UwMTllZmE0ZTMxZmY2ZmJkNzczZmVhMzM3ODU2ZAuDCpA=: 00:10:55.703 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.703 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:10:55.703 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.703 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.703 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.703 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:55.703 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:55.703 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:55.703 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:55.961 13:53:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:10:55.961 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:55.961 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:55.961 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:55.961 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:55.961 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.961 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:55.961 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.961 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.961 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.962 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:55.962 13:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.220 00:10:56.478 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:56.478 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.478 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:56.736 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.736 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.736 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.736 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.736 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.736 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:56.736 { 00:10:56.736 "cntlid": 25, 00:10:56.736 "qid": 0, 00:10:56.736 "state": "enabled", 00:10:56.736 "thread": "nvmf_tgt_poll_group_000", 00:10:56.736 "listen_address": { 00:10:56.736 "trtype": "TCP", 00:10:56.736 "adrfam": "IPv4", 00:10:56.736 "traddr": "10.0.0.2", 00:10:56.736 "trsvcid": "4420" 00:10:56.736 }, 00:10:56.736 "peer_address": { 00:10:56.736 "trtype": "TCP", 00:10:56.736 
"adrfam": "IPv4", 00:10:56.736 "traddr": "10.0.0.1", 00:10:56.736 "trsvcid": "40134" 00:10:56.736 }, 00:10:56.736 "auth": { 00:10:56.736 "state": "completed", 00:10:56.736 "digest": "sha256", 00:10:56.736 "dhgroup": "ffdhe4096" 00:10:56.736 } 00:10:56.736 } 00:10:56.736 ]' 00:10:56.736 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:56.736 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:56.736 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:56.736 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:56.736 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:56.736 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.736 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.736 13:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.302 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:00:YmRlN2E2MDliZDQzMDExYjhlNThiM2Y4YWMxODgzYTBlZTZlZDdhOGU4OWE0OTg3IOMeBA==: --dhchap-ctrl-secret DHHC-1:03:NWMzNjFlODViNmI5YWNjZDlkY2U1ZDRkYzU1YTk2MDlkMDQ3MDE1OGZhMDQ0YjcyODVlNTUwMjFlYzhlMmU1ZcIFnmw=: 00:10:57.869 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.869 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:10:57.869 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.869 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.869 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.869 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:57.869 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:57.869 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:58.127 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:10:58.127 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:58.127 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:58.127 13:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:58.127 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:58.127 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.127 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.127 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.127 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.127 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.127 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.127 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.704 00:10:58.704 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:58.704 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:58.704 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.961 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.961 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.961 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.961 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.961 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.961 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:58.961 { 00:10:58.961 "cntlid": 27, 00:10:58.961 "qid": 0, 00:10:58.961 "state": "enabled", 00:10:58.961 "thread": "nvmf_tgt_poll_group_000", 00:10:58.961 "listen_address": { 00:10:58.961 "trtype": "TCP", 00:10:58.961 "adrfam": "IPv4", 00:10:58.961 "traddr": "10.0.0.2", 00:10:58.961 "trsvcid": "4420" 00:10:58.961 }, 00:10:58.961 "peer_address": { 00:10:58.961 "trtype": "TCP", 00:10:58.961 "adrfam": "IPv4", 00:10:58.961 "traddr": "10.0.0.1", 00:10:58.961 "trsvcid": "40166" 00:10:58.961 }, 00:10:58.961 "auth": { 00:10:58.961 "state": "completed", 00:10:58.961 "digest": "sha256", 00:10:58.961 "dhgroup": "ffdhe4096" 00:10:58.961 } 00:10:58.961 } 00:10:58.961 ]' 00:10:58.961 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:10:58.961 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:58.961 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:58.961 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:58.961 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:59.219 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.219 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.219 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.476 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:01:MzBjZTY1MGQ0MTJiYzA4ZWI2MzUyYzEyNzM3NDFhN2MOF0u8: --dhchap-ctrl-secret DHHC-1:02:MDViZWYyMmY4YzRmN2NlZmUyOWU0ZGQyMTU0NWNhMmRlNGUwYTRiY2U1NjYxZTNk/peKUw==: 00:11:00.041 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.041 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:00.041 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.041 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.041 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.041 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:00.041 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:00.041 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:00.299 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:11:00.299 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:00.299 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:00.299 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:00.299 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:00.299 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.299 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.299 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.299 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.299 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.299 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.299 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.557 00:11:00.557 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:00.557 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:00.557 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.123 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.123 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.123 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.123 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.123 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.123 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:01.123 { 00:11:01.123 "cntlid": 29, 00:11:01.123 "qid": 0, 00:11:01.123 "state": "enabled", 00:11:01.123 "thread": "nvmf_tgt_poll_group_000", 00:11:01.123 "listen_address": { 00:11:01.123 "trtype": "TCP", 00:11:01.123 "adrfam": "IPv4", 00:11:01.123 "traddr": "10.0.0.2", 00:11:01.123 "trsvcid": "4420" 00:11:01.123 }, 00:11:01.123 "peer_address": { 00:11:01.123 "trtype": "TCP", 00:11:01.123 "adrfam": "IPv4", 00:11:01.123 "traddr": "10.0.0.1", 00:11:01.123 "trsvcid": "40188" 00:11:01.123 }, 00:11:01.123 "auth": { 00:11:01.123 "state": "completed", 00:11:01.123 "digest": "sha256", 00:11:01.123 "dhgroup": "ffdhe4096" 00:11:01.123 } 00:11:01.123 } 00:11:01.123 ]' 00:11:01.123 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:01.123 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:01.123 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:01.123 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:01.123 13:53:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:01.123 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.123 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.123 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.381 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:02:MWY2ZjZjNWVmMjc0ZTY3YThiMGU2NWEwOWUwZDhhNTMwZDgyMzY4MTIxZjU3NGQwjIQi8A==: --dhchap-ctrl-secret DHHC-1:01:ZjFiOTliNGNiMGJkMmUyMDhhYmFiNDdjZTVkZTU0YTIx/32G: 00:11:01.958 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.958 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:01.958 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.958 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.958 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.958 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:01.958 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:01.958 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:02.216 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:11:02.216 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:02.216 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:02.216 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:02.216 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:02.216 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.216 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key3 00:11:02.216 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.216 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.216 13:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.216 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:02.216 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:02.783 00:11:02.783 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:02.783 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:02.783 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.783 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.783 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.783 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.783 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.042 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.042 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:03.042 { 00:11:03.042 "cntlid": 31, 00:11:03.042 "qid": 0, 00:11:03.042 "state": "enabled", 00:11:03.042 "thread": "nvmf_tgt_poll_group_000", 00:11:03.042 "listen_address": { 00:11:03.042 "trtype": "TCP", 00:11:03.042 "adrfam": "IPv4", 00:11:03.042 "traddr": "10.0.0.2", 00:11:03.042 "trsvcid": "4420" 00:11:03.042 }, 00:11:03.042 "peer_address": { 00:11:03.042 "trtype": "TCP", 00:11:03.042 "adrfam": "IPv4", 00:11:03.042 "traddr": "10.0.0.1", 00:11:03.042 "trsvcid": "41030" 00:11:03.042 }, 00:11:03.042 "auth": { 00:11:03.042 "state": "completed", 00:11:03.042 "digest": "sha256", 00:11:03.042 "dhgroup": "ffdhe4096" 00:11:03.042 } 00:11:03.042 } 00:11:03.042 ]' 00:11:03.042 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:03.042 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:03.042 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:03.042 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:03.042 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:03.042 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.042 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.042 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.300 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:03:YTczZTY0NjI1NzdmYjQ3YTc2MzZjZWNmMDk3MTIxZTNlM2UwMTllZmE0ZTMxZmY2ZmJkNzczZmVhMzM3ODU2ZAuDCpA=: 00:11:04.231 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.231 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:04.231 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.231 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.231 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.231 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:04.231 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:04.231 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:04.231 13:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:04.231 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:11:04.231 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:04.231 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:04.231 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:04.231 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:04.231 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.231 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.231 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.231 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.231 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.231 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:04.231 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.810 00:11:04.810 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:04.810 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:04.810 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.068 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.068 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.068 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.068 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.068 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.068 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:05.068 { 00:11:05.068 "cntlid": 33, 00:11:05.068 "qid": 0, 00:11:05.068 "state": "enabled", 00:11:05.068 "thread": "nvmf_tgt_poll_group_000", 00:11:05.068 "listen_address": { 00:11:05.068 "trtype": "TCP", 00:11:05.068 "adrfam": "IPv4", 00:11:05.068 "traddr": "10.0.0.2", 00:11:05.068 "trsvcid": "4420" 00:11:05.068 }, 00:11:05.068 "peer_address": { 00:11:05.068 "trtype": "TCP", 00:11:05.068 "adrfam": "IPv4", 00:11:05.068 "traddr": "10.0.0.1", 00:11:05.068 "trsvcid": "41066" 00:11:05.068 }, 00:11:05.068 "auth": { 00:11:05.068 "state": "completed", 00:11:05.068 "digest": "sha256", 00:11:05.068 "dhgroup": "ffdhe6144" 00:11:05.068 } 00:11:05.068 } 00:11:05.068 ]' 00:11:05.068 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:05.068 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:05.068 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:05.327 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:05.327 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:05.327 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.327 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.327 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.585 13:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 
71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:00:YmRlN2E2MDliZDQzMDExYjhlNThiM2Y4YWMxODgzYTBlZTZlZDdhOGU4OWE0OTg3IOMeBA==: --dhchap-ctrl-secret DHHC-1:03:NWMzNjFlODViNmI5YWNjZDlkY2U1ZDRkYzU1YTk2MDlkMDQ3MDE1OGZhMDQ0YjcyODVlNTUwMjFlYzhlMmU1ZcIFnmw=: 00:11:06.175 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.436 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:06.436 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.436 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.436 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.436 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:06.436 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:06.436 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:06.694 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:11:06.694 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:06.694 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:06.694 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:06.694 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:06.694 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.694 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.694 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.694 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.694 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.694 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.694 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.954 00:11:06.954 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:06.954 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:06.954 13:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.214 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.214 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.214 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.214 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.214 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.214 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:07.214 { 00:11:07.214 "cntlid": 35, 00:11:07.214 "qid": 0, 00:11:07.214 "state": "enabled", 00:11:07.214 "thread": "nvmf_tgt_poll_group_000", 00:11:07.214 "listen_address": { 00:11:07.214 "trtype": "TCP", 00:11:07.214 "adrfam": "IPv4", 00:11:07.214 "traddr": "10.0.0.2", 00:11:07.214 "trsvcid": "4420" 00:11:07.214 }, 00:11:07.214 "peer_address": { 00:11:07.214 "trtype": "TCP", 00:11:07.214 "adrfam": "IPv4", 00:11:07.214 "traddr": "10.0.0.1", 00:11:07.214 "trsvcid": "41092" 00:11:07.214 }, 00:11:07.214 "auth": { 00:11:07.214 "state": "completed", 00:11:07.214 "digest": "sha256", 00:11:07.214 "dhgroup": "ffdhe6144" 00:11:07.214 } 00:11:07.214 } 00:11:07.214 ]' 00:11:07.214 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:07.473 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:07.473 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:07.473 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:07.473 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:07.473 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.473 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.473 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.731 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:01:MzBjZTY1MGQ0MTJiYzA4ZWI2MzUyYzEyNzM3NDFhN2MOF0u8: --dhchap-ctrl-secret DHHC-1:02:MDViZWYyMmY4YzRmN2NlZmUyOWU0ZGQyMTU0NWNhMmRlNGUwYTRiY2U1NjYxZTNk/peKUw==: 00:11:08.298 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.298 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.298 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:08.298 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.298 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.298 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.298 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:08.298 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:08.298 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:08.557 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:11:08.557 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:08.557 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:08.557 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:08.557 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:08.557 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.557 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.557 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.557 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.557 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.557 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.557 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.123 00:11:09.123 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:09.123 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.123 13:53:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:09.382 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.382 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.382 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.382 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.382 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.382 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:09.382 { 00:11:09.382 "cntlid": 37, 00:11:09.382 "qid": 0, 00:11:09.382 "state": "enabled", 00:11:09.382 "thread": "nvmf_tgt_poll_group_000", 00:11:09.382 "listen_address": { 00:11:09.382 "trtype": "TCP", 00:11:09.382 "adrfam": "IPv4", 00:11:09.382 "traddr": "10.0.0.2", 00:11:09.382 "trsvcid": "4420" 00:11:09.382 }, 00:11:09.382 "peer_address": { 00:11:09.382 "trtype": "TCP", 00:11:09.382 "adrfam": "IPv4", 00:11:09.382 "traddr": "10.0.0.1", 00:11:09.382 "trsvcid": "41106" 00:11:09.382 }, 00:11:09.382 "auth": { 00:11:09.382 "state": "completed", 00:11:09.382 "digest": "sha256", 00:11:09.382 "dhgroup": "ffdhe6144" 00:11:09.382 } 00:11:09.382 } 00:11:09.382 ]' 00:11:09.382 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:09.382 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:09.382 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:09.641 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:09.641 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:09.641 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.641 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.641 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.899 13:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:02:MWY2ZjZjNWVmMjc0ZTY3YThiMGU2NWEwOWUwZDhhNTMwZDgyMzY4MTIxZjU3NGQwjIQi8A==: --dhchap-ctrl-secret DHHC-1:01:ZjFiOTliNGNiMGJkMmUyMDhhYmFiNDdjZTVkZTU0YTIx/32G: 00:11:10.467 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:10.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
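The trace above repeats one authentication cycle per DH group and key index. A condensed sketch of that cycle follows, assuming the same NQNs, addresses, key names, and host RPC socket shown in this log; it is not the verbatim auth.sh code, and rpc_cmd stands for the target-side rpc.py wrapper whose socket is not shown in this excerpt. The $DHCHAP_SECRET/$DHCHAP_CTRL_SECRET variables are placeholders for the full DHHC-1 strings visible in the trace.

    HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89

    # Limit the host to the digest/dhgroup under test
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

    # Allow the host on the subsystem with the key pair under test (target-side RPC)
    rpc_cmd nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Attach over TCP with DH-HMAC-CHAP and verify the qpair authenticated
    $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $HOSTNQN -n $SUBNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2
    $HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'                 # expect nvme0
    rpc_cmd nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'   # expect "completed" (digest/dhgroup checked the same way)

    # Tear down, then repeat the check in-band with nvme-cli using the DHHC-1 secrets
    $HOSTRPC bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
        --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 \
        --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
    nvme disconnect -n $SUBNQN
    rpc_cmd nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

The controller key is only passed when one exists for that index (the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion in the trace), which is why the key3 iterations above run with --dhchap-key key3 alone.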
00:11:10.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:10.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:10.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:10.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:11:10.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:10.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:10.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:10.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:10.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key3 00:11:10.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:10.726 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:11.293 00:11:11.293 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:11.293 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.293 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:11.551 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.551 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.551 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.551 13:54:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.551 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.551 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:11.551 { 00:11:11.551 "cntlid": 39, 00:11:11.551 "qid": 0, 00:11:11.551 "state": "enabled", 00:11:11.551 "thread": "nvmf_tgt_poll_group_000", 00:11:11.551 "listen_address": { 00:11:11.551 "trtype": "TCP", 00:11:11.551 "adrfam": "IPv4", 00:11:11.551 "traddr": "10.0.0.2", 00:11:11.551 "trsvcid": "4420" 00:11:11.551 }, 00:11:11.551 "peer_address": { 00:11:11.551 "trtype": "TCP", 00:11:11.551 "adrfam": "IPv4", 00:11:11.551 "traddr": "10.0.0.1", 00:11:11.551 "trsvcid": "41124" 00:11:11.551 }, 00:11:11.551 "auth": { 00:11:11.551 "state": "completed", 00:11:11.551 "digest": "sha256", 00:11:11.551 "dhgroup": "ffdhe6144" 00:11:11.551 } 00:11:11.551 } 00:11:11.551 ]' 00:11:11.551 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:11.810 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:11.810 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:11.810 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:11.810 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:11.810 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.810 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.810 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.068 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:03:YTczZTY0NjI1NzdmYjQ3YTc2MzZjZWNmMDk3MTIxZTNlM2UwMTllZmE0ZTMxZmY2ZmJkNzczZmVhMzM3ODU2ZAuDCpA=: 00:11:13.003 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.003 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:13.003 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.003 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.003 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.003 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:13.003 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:13.004 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:13.004 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:13.004 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:11:13.004 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:13.004 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:13.004 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:13.004 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:13.004 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.004 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.004 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.004 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.004 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.004 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.004 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.940 00:11:13.940 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:13.940 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.940 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:13.940 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.940 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.940 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.940 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.940 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.940 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.940 { 00:11:13.940 "cntlid": 41, 00:11:13.940 "qid": 0, 
00:11:13.940 "state": "enabled", 00:11:13.940 "thread": "nvmf_tgt_poll_group_000", 00:11:13.940 "listen_address": { 00:11:13.940 "trtype": "TCP", 00:11:13.940 "adrfam": "IPv4", 00:11:13.940 "traddr": "10.0.0.2", 00:11:13.940 "trsvcid": "4420" 00:11:13.940 }, 00:11:13.940 "peer_address": { 00:11:13.940 "trtype": "TCP", 00:11:13.940 "adrfam": "IPv4", 00:11:13.940 "traddr": "10.0.0.1", 00:11:13.940 "trsvcid": "53152" 00:11:13.940 }, 00:11:13.940 "auth": { 00:11:13.940 "state": "completed", 00:11:13.940 "digest": "sha256", 00:11:13.940 "dhgroup": "ffdhe8192" 00:11:13.940 } 00:11:13.940 } 00:11:13.940 ]' 00:11:13.940 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:14.198 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:14.198 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:14.198 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:14.198 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:14.198 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.198 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.198 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.457 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:00:YmRlN2E2MDliZDQzMDExYjhlNThiM2Y4YWMxODgzYTBlZTZlZDdhOGU4OWE0OTg3IOMeBA==: --dhchap-ctrl-secret DHHC-1:03:NWMzNjFlODViNmI5YWNjZDlkY2U1ZDRkYzU1YTk2MDlkMDQ3MDE1OGZhMDQ0YjcyODVlNTUwMjFlYzhlMmU1ZcIFnmw=: 00:11:15.023 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.023 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:15.023 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.023 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.023 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.023 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:15.023 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:15.023 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:15.590 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:11:15.590 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:15.590 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:15.590 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:15.590 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:15.590 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.590 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.590 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.590 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.590 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.590 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.590 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.159 00:11:16.159 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:16.159 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.159 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:16.417 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.417 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.417 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.417 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.417 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.417 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:16.417 { 00:11:16.417 "cntlid": 43, 00:11:16.417 "qid": 0, 00:11:16.417 "state": "enabled", 00:11:16.417 "thread": "nvmf_tgt_poll_group_000", 00:11:16.417 "listen_address": { 00:11:16.417 "trtype": "TCP", 00:11:16.417 "adrfam": "IPv4", 00:11:16.417 "traddr": "10.0.0.2", 00:11:16.417 "trsvcid": "4420" 00:11:16.417 }, 00:11:16.417 "peer_address": { 00:11:16.417 "trtype": "TCP", 00:11:16.417 "adrfam": "IPv4", 00:11:16.417 "traddr": "10.0.0.1", 
00:11:16.417 "trsvcid": "53184" 00:11:16.417 }, 00:11:16.417 "auth": { 00:11:16.417 "state": "completed", 00:11:16.417 "digest": "sha256", 00:11:16.417 "dhgroup": "ffdhe8192" 00:11:16.417 } 00:11:16.417 } 00:11:16.417 ]' 00:11:16.417 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:16.417 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:16.417 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:16.417 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:16.417 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:16.676 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.676 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.676 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.934 13:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:01:MzBjZTY1MGQ0MTJiYzA4ZWI2MzUyYzEyNzM3NDFhN2MOF0u8: --dhchap-ctrl-secret DHHC-1:02:MDViZWYyMmY4YzRmN2NlZmUyOWU0ZGQyMTU0NWNhMmRlNGUwYTRiY2U1NjYxZTNk/peKUw==: 00:11:17.502 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.502 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:17.502 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.502 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.502 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.502 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:17.502 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:17.502 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:17.760 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:11:17.760 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:17.760 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:17.760 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:17.760 13:54:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:17.760 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.760 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.760 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.760 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.760 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.761 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.761 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.697 00:11:18.697 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:18.697 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.697 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:18.956 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.956 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.956 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.956 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.956 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.956 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:18.956 { 00:11:18.956 "cntlid": 45, 00:11:18.956 "qid": 0, 00:11:18.956 "state": "enabled", 00:11:18.956 "thread": "nvmf_tgt_poll_group_000", 00:11:18.956 "listen_address": { 00:11:18.956 "trtype": "TCP", 00:11:18.956 "adrfam": "IPv4", 00:11:18.956 "traddr": "10.0.0.2", 00:11:18.956 "trsvcid": "4420" 00:11:18.956 }, 00:11:18.956 "peer_address": { 00:11:18.956 "trtype": "TCP", 00:11:18.956 "adrfam": "IPv4", 00:11:18.956 "traddr": "10.0.0.1", 00:11:18.956 "trsvcid": "53224" 00:11:18.956 }, 00:11:18.956 "auth": { 00:11:18.956 "state": "completed", 00:11:18.956 "digest": "sha256", 00:11:18.956 "dhgroup": "ffdhe8192" 00:11:18.956 } 00:11:18.956 } 00:11:18.956 ]' 00:11:18.956 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:18.956 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:11:18.956 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:18.956 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:18.956 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:18.956 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.956 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.956 13:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.216 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:02:MWY2ZjZjNWVmMjc0ZTY3YThiMGU2NWEwOWUwZDhhNTMwZDgyMzY4MTIxZjU3NGQwjIQi8A==: --dhchap-ctrl-secret DHHC-1:01:ZjFiOTliNGNiMGJkMmUyMDhhYmFiNDdjZTVkZTU0YTIx/32G: 00:11:20.363 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.363 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:20.363 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.363 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.363 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.363 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:20.363 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:20.363 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:20.363 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:11:20.363 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:20.363 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:20.363 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:20.363 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:20.363 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.363 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 
--dhchap-key key3 00:11:20.363 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.363 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.363 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.363 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:20.363 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:20.930 00:11:20.930 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:20.930 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.930 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:21.188 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.188 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.188 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.188 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.188 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.188 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:21.188 { 00:11:21.188 "cntlid": 47, 00:11:21.188 "qid": 0, 00:11:21.188 "state": "enabled", 00:11:21.188 "thread": "nvmf_tgt_poll_group_000", 00:11:21.188 "listen_address": { 00:11:21.188 "trtype": "TCP", 00:11:21.188 "adrfam": "IPv4", 00:11:21.188 "traddr": "10.0.0.2", 00:11:21.188 "trsvcid": "4420" 00:11:21.188 }, 00:11:21.188 "peer_address": { 00:11:21.188 "trtype": "TCP", 00:11:21.188 "adrfam": "IPv4", 00:11:21.188 "traddr": "10.0.0.1", 00:11:21.188 "trsvcid": "53248" 00:11:21.188 }, 00:11:21.188 "auth": { 00:11:21.188 "state": "completed", 00:11:21.188 "digest": "sha256", 00:11:21.188 "dhgroup": "ffdhe8192" 00:11:21.188 } 00:11:21.188 } 00:11:21.188 ]' 00:11:21.188 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:21.188 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:21.188 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:21.188 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:21.188 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:21.447 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:11:21.447 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.447 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.705 13:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:03:YTczZTY0NjI1NzdmYjQ3YTc2MzZjZWNmMDk3MTIxZTNlM2UwMTllZmE0ZTMxZmY2ZmJkNzczZmVhMzM3ODU2ZAuDCpA=: 00:11:22.270 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.270 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:22.270 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.270 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.270 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.270 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:22.270 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:22.270 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:22.270 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:22.270 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:22.529 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:11:22.529 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:22.529 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:22.529 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:22.529 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:22.529 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.529 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.529 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.529 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.529 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.529 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.529 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.096 00:11:23.096 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.096 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:23.096 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.096 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.096 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.096 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.096 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.096 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.096 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:23.096 { 00:11:23.096 "cntlid": 49, 00:11:23.096 "qid": 0, 00:11:23.096 "state": "enabled", 00:11:23.096 "thread": "nvmf_tgt_poll_group_000", 00:11:23.096 "listen_address": { 00:11:23.096 "trtype": "TCP", 00:11:23.096 "adrfam": "IPv4", 00:11:23.096 "traddr": "10.0.0.2", 00:11:23.096 "trsvcid": "4420" 00:11:23.096 }, 00:11:23.096 "peer_address": { 00:11:23.096 "trtype": "TCP", 00:11:23.096 "adrfam": "IPv4", 00:11:23.096 "traddr": "10.0.0.1", 00:11:23.096 "trsvcid": "57856" 00:11:23.096 }, 00:11:23.096 "auth": { 00:11:23.096 "state": "completed", 00:11:23.096 "digest": "sha384", 00:11:23.096 "dhgroup": "null" 00:11:23.096 } 00:11:23.096 } 00:11:23.096 ]' 00:11:23.096 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:23.355 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:23.355 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:23.355 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:23.355 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:23.355 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.355 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.355 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.613 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:00:YmRlN2E2MDliZDQzMDExYjhlNThiM2Y4YWMxODgzYTBlZTZlZDdhOGU4OWE0OTg3IOMeBA==: --dhchap-ctrl-secret DHHC-1:03:NWMzNjFlODViNmI5YWNjZDlkY2U1ZDRkYzU1YTk2MDlkMDQ3MDE1OGZhMDQ0YjcyODVlNTUwMjFlYzhlMmU1ZcIFnmw=: 00:11:24.180 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.180 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:24.180 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.180 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.180 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.180 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:24.180 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:24.180 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:24.450 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:24.450 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:24.450 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:24.450 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:24.450 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:24.450 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.450 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.450 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.450 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.450 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.450 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.451 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.026 00:11:25.026 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:25.026 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:25.026 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.026 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.026 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.026 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.026 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.026 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.026 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:25.026 { 00:11:25.026 "cntlid": 51, 00:11:25.026 "qid": 0, 00:11:25.026 "state": "enabled", 00:11:25.026 "thread": "nvmf_tgt_poll_group_000", 00:11:25.026 "listen_address": { 00:11:25.026 "trtype": "TCP", 00:11:25.026 "adrfam": "IPv4", 00:11:25.026 "traddr": "10.0.0.2", 00:11:25.026 "trsvcid": "4420" 00:11:25.026 }, 00:11:25.026 "peer_address": { 00:11:25.026 "trtype": "TCP", 00:11:25.026 "adrfam": "IPv4", 00:11:25.026 "traddr": "10.0.0.1", 00:11:25.026 "trsvcid": "57870" 00:11:25.026 }, 00:11:25.026 "auth": { 00:11:25.026 "state": "completed", 00:11:25.026 "digest": "sha384", 00:11:25.026 "dhgroup": "null" 00:11:25.026 } 00:11:25.026 } 00:11:25.026 ]' 00:11:25.026 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:25.285 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:25.285 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:25.285 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:25.285 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:25.285 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.285 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.285 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.545 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:01:MzBjZTY1MGQ0MTJiYzA4ZWI2MzUyYzEyNzM3NDFhN2MOF0u8: --dhchap-ctrl-secret 
DHHC-1:02:MDViZWYyMmY4YzRmN2NlZmUyOWU0ZGQyMTU0NWNhMmRlNGUwYTRiY2U1NjYxZTNk/peKUw==: 00:11:26.480 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.480 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:26.480 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.480 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.480 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.480 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:26.480 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:26.480 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:26.480 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:26.480 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:26.480 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:26.480 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:26.480 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:26.480 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.480 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.480 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.480 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.480 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.480 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.480 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.047 00:11:27.047 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.047 13:54:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.047 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.047 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.047 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.047 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.047 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.047 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.047 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:27.047 { 00:11:27.047 "cntlid": 53, 00:11:27.047 "qid": 0, 00:11:27.047 "state": "enabled", 00:11:27.047 "thread": "nvmf_tgt_poll_group_000", 00:11:27.047 "listen_address": { 00:11:27.047 "trtype": "TCP", 00:11:27.047 "adrfam": "IPv4", 00:11:27.047 "traddr": "10.0.0.2", 00:11:27.047 "trsvcid": "4420" 00:11:27.047 }, 00:11:27.047 "peer_address": { 00:11:27.047 "trtype": "TCP", 00:11:27.047 "adrfam": "IPv4", 00:11:27.047 "traddr": "10.0.0.1", 00:11:27.047 "trsvcid": "57894" 00:11:27.047 }, 00:11:27.047 "auth": { 00:11:27.047 "state": "completed", 00:11:27.047 "digest": "sha384", 00:11:27.047 "dhgroup": "null" 00:11:27.047 } 00:11:27.047 } 00:11:27.047 ]' 00:11:27.047 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:27.335 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:27.335 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:27.335 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:27.335 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:27.335 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.335 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.335 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.593 13:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:02:MWY2ZjZjNWVmMjc0ZTY3YThiMGU2NWEwOWUwZDhhNTMwZDgyMzY4MTIxZjU3NGQwjIQi8A==: --dhchap-ctrl-secret DHHC-1:01:ZjFiOTliNGNiMGJkMmUyMDhhYmFiNDdjZTVkZTU0YTIx/32G: 00:11:28.530 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.530 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:28.530 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.530 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.530 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.530 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:28.530 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:28.530 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:28.530 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:11:28.530 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:28.530 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:28.530 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:28.530 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:28.530 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.530 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key3 00:11:28.530 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.530 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.530 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.530 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:28.530 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:29.098 00:11:29.098 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:29.098 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:29.098 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.098 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.098 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:11:29.098 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.098 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.098 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.098 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:29.098 { 00:11:29.098 "cntlid": 55, 00:11:29.098 "qid": 0, 00:11:29.098 "state": "enabled", 00:11:29.098 "thread": "nvmf_tgt_poll_group_000", 00:11:29.098 "listen_address": { 00:11:29.098 "trtype": "TCP", 00:11:29.098 "adrfam": "IPv4", 00:11:29.098 "traddr": "10.0.0.2", 00:11:29.098 "trsvcid": "4420" 00:11:29.098 }, 00:11:29.098 "peer_address": { 00:11:29.098 "trtype": "TCP", 00:11:29.098 "adrfam": "IPv4", 00:11:29.098 "traddr": "10.0.0.1", 00:11:29.098 "trsvcid": "57930" 00:11:29.098 }, 00:11:29.098 "auth": { 00:11:29.098 "state": "completed", 00:11:29.098 "digest": "sha384", 00:11:29.098 "dhgroup": "null" 00:11:29.098 } 00:11:29.098 } 00:11:29.098 ]' 00:11:29.098 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:29.356 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:29.356 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:29.356 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:29.356 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:29.356 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.356 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.356 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.617 13:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:03:YTczZTY0NjI1NzdmYjQ3YTc2MzZjZWNmMDk3MTIxZTNlM2UwMTllZmE0ZTMxZmY2ZmJkNzczZmVhMzM3ODU2ZAuDCpA=: 00:11:30.574 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.574 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:30.574 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.574 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.574 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.574 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:30.574 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:30.574 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:30.574 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:30.575 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:30.575 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:30.575 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:30.575 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:30.575 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:30.575 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.575 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.575 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.575 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.575 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.575 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.575 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.883 00:11:31.141 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:31.141 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:31.141 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.398 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.398 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.398 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.398 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.398 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.398 13:54:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:31.398 { 00:11:31.398 "cntlid": 57, 00:11:31.398 "qid": 0, 00:11:31.398 "state": "enabled", 00:11:31.398 "thread": "nvmf_tgt_poll_group_000", 00:11:31.398 "listen_address": { 00:11:31.398 "trtype": "TCP", 00:11:31.398 "adrfam": "IPv4", 00:11:31.398 "traddr": "10.0.0.2", 00:11:31.398 "trsvcid": "4420" 00:11:31.398 }, 00:11:31.398 "peer_address": { 00:11:31.398 "trtype": "TCP", 00:11:31.398 "adrfam": "IPv4", 00:11:31.398 "traddr": "10.0.0.1", 00:11:31.398 "trsvcid": "57964" 00:11:31.398 }, 00:11:31.398 "auth": { 00:11:31.398 "state": "completed", 00:11:31.398 "digest": "sha384", 00:11:31.398 "dhgroup": "ffdhe2048" 00:11:31.398 } 00:11:31.398 } 00:11:31.398 ]' 00:11:31.398 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:31.398 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:31.398 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:31.398 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:31.398 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:31.398 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.398 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.398 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.656 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:00:YmRlN2E2MDliZDQzMDExYjhlNThiM2Y4YWMxODgzYTBlZTZlZDdhOGU4OWE0OTg3IOMeBA==: --dhchap-ctrl-secret DHHC-1:03:NWMzNjFlODViNmI5YWNjZDlkY2U1ZDRkYzU1YTk2MDlkMDQ3MDE1OGZhMDQ0YjcyODVlNTUwMjFlYzhlMmU1ZcIFnmw=: 00:11:32.221 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.221 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:32.221 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.221 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.479 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.479 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:32.479 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:32.479 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:32.479 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:11:32.479 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:32.479 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:32.479 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:32.480 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:32.480 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.480 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.480 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.480 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.480 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.480 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.480 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.045 00:11:33.045 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:33.045 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:33.045 13:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.304 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.304 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.304 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.304 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.304 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.304 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:33.304 { 00:11:33.304 "cntlid": 59, 00:11:33.304 "qid": 0, 00:11:33.304 "state": "enabled", 00:11:33.304 "thread": "nvmf_tgt_poll_group_000", 00:11:33.304 "listen_address": { 00:11:33.304 "trtype": "TCP", 00:11:33.304 "adrfam": "IPv4", 00:11:33.304 "traddr": "10.0.0.2", 00:11:33.304 "trsvcid": "4420" 
00:11:33.304 }, 00:11:33.304 "peer_address": { 00:11:33.304 "trtype": "TCP", 00:11:33.304 "adrfam": "IPv4", 00:11:33.304 "traddr": "10.0.0.1", 00:11:33.304 "trsvcid": "35754" 00:11:33.304 }, 00:11:33.304 "auth": { 00:11:33.304 "state": "completed", 00:11:33.304 "digest": "sha384", 00:11:33.304 "dhgroup": "ffdhe2048" 00:11:33.304 } 00:11:33.304 } 00:11:33.304 ]' 00:11:33.304 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:33.304 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:33.304 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:33.304 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:33.304 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:33.562 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.562 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.562 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.820 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:01:MzBjZTY1MGQ0MTJiYzA4ZWI2MzUyYzEyNzM3NDFhN2MOF0u8: --dhchap-ctrl-secret DHHC-1:02:MDViZWYyMmY4YzRmN2NlZmUyOWU0ZGQyMTU0NWNhMmRlNGUwYTRiY2U1NjYxZTNk/peKUw==: 00:11:34.385 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.385 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:34.385 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.385 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.385 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.385 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:34.385 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:34.385 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:34.642 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:11:34.642 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:34.642 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
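[editor's note] Each pass of the script's connect_authenticate helper, like the sha384/ffdhe2048/key2 pass starting here, reduces to a short RPC sequence. A minimal sketch of that sequence follows; the rpc.py subcommands and flags are copied from the trace, while the variable names, the hard-coded key names key2/ckey2 (DH-HMAC-CHAP keys registered earlier in the script), and the direct rpc.py calls standing in for the harness's rpc_cmd/hostrpc wrappers are illustrative assumptions only:

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89

  # host side (SPDK bdev_nvme): restrict negotiation to the digest/dhgroup under test
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

  # target side: allow the host NQN and bind its DHCHAP key pair
  # (the trace issues this through rpc_cmd, the harness wrapper for the target's rpc.py)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # host side: attach a controller, which performs DH-HMAC-CHAP during connect
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

The checks that follow each attach, and the kernel-initiator reconnect, are sketched further below.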
00:11:34.642 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:34.642 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:34.642 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.642 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.642 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.642 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.642 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.642 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.642 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.898 00:11:34.898 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:34.898 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.898 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:35.155 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.155 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.155 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.155 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.155 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.155 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:35.155 { 00:11:35.155 "cntlid": 61, 00:11:35.155 "qid": 0, 00:11:35.155 "state": "enabled", 00:11:35.155 "thread": "nvmf_tgt_poll_group_000", 00:11:35.155 "listen_address": { 00:11:35.155 "trtype": "TCP", 00:11:35.155 "adrfam": "IPv4", 00:11:35.155 "traddr": "10.0.0.2", 00:11:35.155 "trsvcid": "4420" 00:11:35.155 }, 00:11:35.155 "peer_address": { 00:11:35.155 "trtype": "TCP", 00:11:35.155 "adrfam": "IPv4", 00:11:35.155 "traddr": "10.0.0.1", 00:11:35.155 "trsvcid": "35774" 00:11:35.155 }, 00:11:35.155 "auth": { 00:11:35.155 "state": "completed", 00:11:35.155 "digest": "sha384", 00:11:35.155 "dhgroup": "ffdhe2048" 00:11:35.155 } 00:11:35.155 } 00:11:35.155 ]' 00:11:35.155 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:35.413 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:35.413 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:35.413 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:35.413 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:35.413 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.413 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.413 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.671 13:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:02:MWY2ZjZjNWVmMjc0ZTY3YThiMGU2NWEwOWUwZDhhNTMwZDgyMzY4MTIxZjU3NGQwjIQi8A==: --dhchap-ctrl-secret DHHC-1:01:ZjFiOTliNGNiMGJkMmUyMDhhYmFiNDdjZTVkZTU0YTIx/32G: 00:11:36.237 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.238 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:36.238 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.238 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.238 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.238 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:36.238 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:36.238 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:36.496 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:11:36.496 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:36.496 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:36.496 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:36.496 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:36.496 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.496 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key3 00:11:36.496 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.496 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.496 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.496 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:36.496 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:36.754 00:11:37.015 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:37.015 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.015 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:37.015 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.015 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.015 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.015 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.279 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.279 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:37.279 { 00:11:37.279 "cntlid": 63, 00:11:37.279 "qid": 0, 00:11:37.279 "state": "enabled", 00:11:37.279 "thread": "nvmf_tgt_poll_group_000", 00:11:37.279 "listen_address": { 00:11:37.279 "trtype": "TCP", 00:11:37.279 "adrfam": "IPv4", 00:11:37.279 "traddr": "10.0.0.2", 00:11:37.279 "trsvcid": "4420" 00:11:37.279 }, 00:11:37.279 "peer_address": { 00:11:37.279 "trtype": "TCP", 00:11:37.279 "adrfam": "IPv4", 00:11:37.279 "traddr": "10.0.0.1", 00:11:37.279 "trsvcid": "35814" 00:11:37.279 }, 00:11:37.279 "auth": { 00:11:37.279 "state": "completed", 00:11:37.279 "digest": "sha384", 00:11:37.279 "dhgroup": "ffdhe2048" 00:11:37.279 } 00:11:37.279 } 00:11:37.279 ]' 00:11:37.279 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:37.279 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:37.279 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:37.279 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:37.279 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:11:37.279 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.279 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.279 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.560 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:03:YTczZTY0NjI1NzdmYjQ3YTc2MzZjZWNmMDk3MTIxZTNlM2UwMTllZmE0ZTMxZmY2ZmJkNzczZmVhMzM3ODU2ZAuDCpA=: 00:11:38.127 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.127 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:38.127 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.127 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.127 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.127 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:38.127 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:38.128 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:38.128 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:38.386 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:11:38.386 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:38.386 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:38.386 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:38.386 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:38.386 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.386 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.386 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.386 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.386 13:54:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.386 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.386 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.953 00:11:38.953 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.953 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.953 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:39.211 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.211 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.211 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.211 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.211 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.211 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:39.211 { 00:11:39.211 "cntlid": 65, 00:11:39.211 "qid": 0, 00:11:39.211 "state": "enabled", 00:11:39.211 "thread": "nvmf_tgt_poll_group_000", 00:11:39.211 "listen_address": { 00:11:39.211 "trtype": "TCP", 00:11:39.211 "adrfam": "IPv4", 00:11:39.211 "traddr": "10.0.0.2", 00:11:39.211 "trsvcid": "4420" 00:11:39.211 }, 00:11:39.211 "peer_address": { 00:11:39.211 "trtype": "TCP", 00:11:39.211 "adrfam": "IPv4", 00:11:39.211 "traddr": "10.0.0.1", 00:11:39.211 "trsvcid": "35846" 00:11:39.211 }, 00:11:39.211 "auth": { 00:11:39.211 "state": "completed", 00:11:39.211 "digest": "sha384", 00:11:39.211 "dhgroup": "ffdhe3072" 00:11:39.211 } 00:11:39.211 } 00:11:39.211 ]' 00:11:39.211 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:39.211 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:39.211 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:39.211 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:39.211 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:39.211 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.211 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.211 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.469 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:00:YmRlN2E2MDliZDQzMDExYjhlNThiM2Y4YWMxODgzYTBlZTZlZDdhOGU4OWE0OTg3IOMeBA==: --dhchap-ctrl-secret DHHC-1:03:NWMzNjFlODViNmI5YWNjZDlkY2U1ZDRkYzU1YTk2MDlkMDQ3MDE1OGZhMDQ0YjcyODVlNTUwMjFlYzhlMmU1ZcIFnmw=: 00:11:40.405 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.405 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:40.405 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.405 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.405 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.405 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:40.405 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:40.405 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:40.405 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:11:40.405 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:40.405 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:40.405 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:40.405 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:40.405 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.405 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.405 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.405 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.405 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.405 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:11:40.405 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.973 00:11:40.973 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:40.973 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:40.973 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.973 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.973 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.973 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.973 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.973 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.973 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:40.973 { 00:11:40.973 "cntlid": 67, 00:11:40.973 "qid": 0, 00:11:40.973 "state": "enabled", 00:11:40.973 "thread": "nvmf_tgt_poll_group_000", 00:11:40.973 "listen_address": { 00:11:40.973 "trtype": "TCP", 00:11:40.973 "adrfam": "IPv4", 00:11:40.973 "traddr": "10.0.0.2", 00:11:40.973 "trsvcid": "4420" 00:11:40.973 }, 00:11:40.973 "peer_address": { 00:11:40.973 "trtype": "TCP", 00:11:40.973 "adrfam": "IPv4", 00:11:40.973 "traddr": "10.0.0.1", 00:11:40.973 "trsvcid": "35856" 00:11:40.973 }, 00:11:40.973 "auth": { 00:11:40.973 "state": "completed", 00:11:40.973 "digest": "sha384", 00:11:40.973 "dhgroup": "ffdhe3072" 00:11:40.973 } 00:11:40.973 } 00:11:40.973 ]' 00:11:40.973 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:41.231 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:41.231 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:41.231 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:41.231 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:41.231 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.231 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.231 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.490 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 
71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:01:MzBjZTY1MGQ0MTJiYzA4ZWI2MzUyYzEyNzM3NDFhN2MOF0u8: --dhchap-ctrl-secret DHHC-1:02:MDViZWYyMmY4YzRmN2NlZmUyOWU0ZGQyMTU0NWNhMmRlNGUwYTRiY2U1NjYxZTNk/peKUw==: 00:11:42.055 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.055 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:42.055 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.055 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.055 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.056 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.056 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:42.056 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:42.313 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:11:42.313 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.313 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:42.313 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:42.313 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:42.313 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.313 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.313 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.313 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.313 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.313 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.313 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
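[editor's note] Right after each attach, such as the ffdhe3072/key2 attach just echoed, connect_authenticate verifies the controller name and the negotiated auth parameters of the new target-side qpair (target/auth.sh@44-48). A sketch of those checks, with the jq filters and expected values taken directly from the trace; rpc_cmd is the harness wrapper for the target-side rpc.py, and the plain [[ ]] comparisons stand in for the backslash-escaped forms bash prints in xtrace:

  # the attached controller must show up under the expected name
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_get_controllers | jq -r '.[].name'            # expect: nvme0

  # the qpair must report the digest/dhgroup forced on the host side, and a completed auth state
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384"    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe3072" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]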
00:11:42.878 00:11:42.878 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:42.878 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:42.878 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.136 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.136 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.136 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.136 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.136 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.136 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.136 { 00:11:43.136 "cntlid": 69, 00:11:43.136 "qid": 0, 00:11:43.136 "state": "enabled", 00:11:43.136 "thread": "nvmf_tgt_poll_group_000", 00:11:43.136 "listen_address": { 00:11:43.136 "trtype": "TCP", 00:11:43.136 "adrfam": "IPv4", 00:11:43.136 "traddr": "10.0.0.2", 00:11:43.136 "trsvcid": "4420" 00:11:43.136 }, 00:11:43.136 "peer_address": { 00:11:43.136 "trtype": "TCP", 00:11:43.136 "adrfam": "IPv4", 00:11:43.136 "traddr": "10.0.0.1", 00:11:43.136 "trsvcid": "55408" 00:11:43.136 }, 00:11:43.136 "auth": { 00:11:43.136 "state": "completed", 00:11:43.136 "digest": "sha384", 00:11:43.136 "dhgroup": "ffdhe3072" 00:11:43.136 } 00:11:43.136 } 00:11:43.136 ]' 00:11:43.136 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.136 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:43.136 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.136 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:43.136 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.136 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.136 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.136 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.394 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:02:MWY2ZjZjNWVmMjc0ZTY3YThiMGU2NWEwOWUwZDhhNTMwZDgyMzY4MTIxZjU3NGQwjIQi8A==: --dhchap-ctrl-secret DHHC-1:01:ZjFiOTliNGNiMGJkMmUyMDhhYmFiNDdjZTVkZTU0YTIx/32G: 00:11:44.330 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
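[editor's note] Besides the SPDK host path, each pass also re-authenticates through the kernel initiator, which is what the nvme connect / nvme disconnect pair that just completed does. A sketch with the flags copied from the trace; the DHHC-1 secret strings are abbreviated placeholders here, the full values for the key in question are printed in the trace above:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89

  # connect with the host and controller secrets that correspond to key2/ckey2
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 \
      --dhchap-secret 'DHHC-1:02:<host secret as printed in the trace>' \
      --dhchap-ctrl-secret 'DHHC-1:01:<controller secret as printed in the trace>'

  # tear the session down and deauthorize the host before the next key is tried
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"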
00:11:44.330 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:44.330 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.330 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.330 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.330 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.330 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:44.330 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:44.588 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:11:44.588 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.588 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:44.588 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:44.588 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:44.588 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.588 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key3 00:11:44.588 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.588 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.588 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.588 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:44.588 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:44.858 00:11:44.858 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:44.858 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.858 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.127 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.127 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.127 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.127 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.127 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.127 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.127 { 00:11:45.127 "cntlid": 71, 00:11:45.127 "qid": 0, 00:11:45.127 "state": "enabled", 00:11:45.127 "thread": "nvmf_tgt_poll_group_000", 00:11:45.127 "listen_address": { 00:11:45.127 "trtype": "TCP", 00:11:45.127 "adrfam": "IPv4", 00:11:45.127 "traddr": "10.0.0.2", 00:11:45.127 "trsvcid": "4420" 00:11:45.127 }, 00:11:45.127 "peer_address": { 00:11:45.127 "trtype": "TCP", 00:11:45.127 "adrfam": "IPv4", 00:11:45.127 "traddr": "10.0.0.1", 00:11:45.127 "trsvcid": "55450" 00:11:45.127 }, 00:11:45.127 "auth": { 00:11:45.127 "state": "completed", 00:11:45.127 "digest": "sha384", 00:11:45.127 "dhgroup": "ffdhe3072" 00:11:45.127 } 00:11:45.127 } 00:11:45.127 ]' 00:11:45.127 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.127 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:45.127 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.385 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:45.385 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.385 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.385 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.385 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.642 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:03:YTczZTY0NjI1NzdmYjQ3YTc2MzZjZWNmMDk3MTIxZTNlM2UwMTllZmE0ZTMxZmY2ZmJkNzczZmVhMzM3ODU2ZAuDCpA=: 00:11:46.206 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.206 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:46.206 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.206 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.206 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:46.206 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:46.206 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.206 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:46.206 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:46.465 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:11:46.465 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:46.465 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:46.465 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:46.465 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:46.465 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.465 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.465 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.465 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.465 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.465 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.465 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.032 00:11:47.032 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.032 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.032 13:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.291 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.291 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.291 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.291 13:54:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.291 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.291 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:47.291 { 00:11:47.291 "cntlid": 73, 00:11:47.291 "qid": 0, 00:11:47.291 "state": "enabled", 00:11:47.291 "thread": "nvmf_tgt_poll_group_000", 00:11:47.291 "listen_address": { 00:11:47.291 "trtype": "TCP", 00:11:47.291 "adrfam": "IPv4", 00:11:47.291 "traddr": "10.0.0.2", 00:11:47.291 "trsvcid": "4420" 00:11:47.291 }, 00:11:47.291 "peer_address": { 00:11:47.291 "trtype": "TCP", 00:11:47.291 "adrfam": "IPv4", 00:11:47.291 "traddr": "10.0.0.1", 00:11:47.291 "trsvcid": "55470" 00:11:47.291 }, 00:11:47.291 "auth": { 00:11:47.291 "state": "completed", 00:11:47.291 "digest": "sha384", 00:11:47.291 "dhgroup": "ffdhe4096" 00:11:47.291 } 00:11:47.291 } 00:11:47.291 ]' 00:11:47.291 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.291 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:47.291 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.291 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:47.291 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.291 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.291 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.291 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.551 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:00:YmRlN2E2MDliZDQzMDExYjhlNThiM2Y4YWMxODgzYTBlZTZlZDdhOGU4OWE0OTg3IOMeBA==: --dhchap-ctrl-secret DHHC-1:03:NWMzNjFlODViNmI5YWNjZDlkY2U1ZDRkYzU1YTk2MDlkMDQ3MDE1OGZhMDQ0YjcyODVlNTUwMjFlYzhlMmU1ZcIFnmw=: 00:11:48.487 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.487 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:48.487 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.487 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.487 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.487 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:48.487 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:48.487 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:48.487 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:11:48.487 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:48.487 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:48.487 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:48.487 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:48.487 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.487 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.487 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.487 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.487 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.487 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.487 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.053 00:11:49.053 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:49.053 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.053 13:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:49.311 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.311 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.311 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.311 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.311 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.311 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:49.311 { 00:11:49.311 "cntlid": 75, 00:11:49.311 "qid": 0, 00:11:49.311 
"state": "enabled", 00:11:49.311 "thread": "nvmf_tgt_poll_group_000", 00:11:49.311 "listen_address": { 00:11:49.311 "trtype": "TCP", 00:11:49.311 "adrfam": "IPv4", 00:11:49.311 "traddr": "10.0.0.2", 00:11:49.311 "trsvcid": "4420" 00:11:49.311 }, 00:11:49.311 "peer_address": { 00:11:49.311 "trtype": "TCP", 00:11:49.311 "adrfam": "IPv4", 00:11:49.311 "traddr": "10.0.0.1", 00:11:49.311 "trsvcid": "55494" 00:11:49.311 }, 00:11:49.311 "auth": { 00:11:49.311 "state": "completed", 00:11:49.311 "digest": "sha384", 00:11:49.311 "dhgroup": "ffdhe4096" 00:11:49.311 } 00:11:49.311 } 00:11:49.311 ]' 00:11:49.311 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:49.311 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:49.311 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:49.311 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:49.311 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.311 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.311 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.311 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.876 13:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:01:MzBjZTY1MGQ0MTJiYzA4ZWI2MzUyYzEyNzM3NDFhN2MOF0u8: --dhchap-ctrl-secret DHHC-1:02:MDViZWYyMmY4YzRmN2NlZmUyOWU0ZGQyMTU0NWNhMmRlNGUwYTRiY2U1NjYxZTNk/peKUw==: 00:11:50.442 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.442 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:50.442 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.442 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.442 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.442 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.442 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:50.442 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:50.700 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 
00:11:50.700 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:50.700 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:50.700 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:50.700 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:50.700 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.700 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.700 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.700 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.700 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.700 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.700 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.958 00:11:50.958 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:50.958 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.958 13:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:51.525 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.525 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.525 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.525 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.525 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.525 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:51.525 { 00:11:51.525 "cntlid": 77, 00:11:51.525 "qid": 0, 00:11:51.525 "state": "enabled", 00:11:51.525 "thread": "nvmf_tgt_poll_group_000", 00:11:51.525 "listen_address": { 00:11:51.525 "trtype": "TCP", 00:11:51.525 "adrfam": "IPv4", 00:11:51.525 "traddr": "10.0.0.2", 00:11:51.525 "trsvcid": "4420" 00:11:51.525 }, 00:11:51.525 "peer_address": { 00:11:51.525 "trtype": "TCP", 00:11:51.525 "adrfam": "IPv4", 00:11:51.525 "traddr": "10.0.0.1", 00:11:51.525 "trsvcid": "55526" 00:11:51.525 }, 00:11:51.525 
"auth": { 00:11:51.525 "state": "completed", 00:11:51.525 "digest": "sha384", 00:11:51.525 "dhgroup": "ffdhe4096" 00:11:51.525 } 00:11:51.525 } 00:11:51.525 ]' 00:11:51.525 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.525 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:51.525 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.525 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:51.525 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.525 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.525 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.525 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.784 13:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:02:MWY2ZjZjNWVmMjc0ZTY3YThiMGU2NWEwOWUwZDhhNTMwZDgyMzY4MTIxZjU3NGQwjIQi8A==: --dhchap-ctrl-secret DHHC-1:01:ZjFiOTliNGNiMGJkMmUyMDhhYmFiNDdjZTVkZTU0YTIx/32G: 00:11:52.720 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.721 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:52.721 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.721 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.721 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.721 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:52.721 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:52.721 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:52.721 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:11:52.721 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:52.721 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:52.721 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:52.721 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:11:52.721 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.721 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key3 00:11:52.721 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.721 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.721 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.721 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:52.721 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:53.286 00:11:53.286 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.287 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.287 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.544 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.544 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.544 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.544 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.544 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.544 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.544 { 00:11:53.544 "cntlid": 79, 00:11:53.544 "qid": 0, 00:11:53.544 "state": "enabled", 00:11:53.544 "thread": "nvmf_tgt_poll_group_000", 00:11:53.544 "listen_address": { 00:11:53.544 "trtype": "TCP", 00:11:53.544 "adrfam": "IPv4", 00:11:53.544 "traddr": "10.0.0.2", 00:11:53.544 "trsvcid": "4420" 00:11:53.544 }, 00:11:53.544 "peer_address": { 00:11:53.544 "trtype": "TCP", 00:11:53.544 "adrfam": "IPv4", 00:11:53.544 "traddr": "10.0.0.1", 00:11:53.544 "trsvcid": "50310" 00:11:53.544 }, 00:11:53.544 "auth": { 00:11:53.544 "state": "completed", 00:11:53.544 "digest": "sha384", 00:11:53.544 "dhgroup": "ffdhe4096" 00:11:53.544 } 00:11:53.544 } 00:11:53.544 ]' 00:11:53.544 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.545 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:53.545 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
00:11:53.545 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:53.545 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.545 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.545 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.545 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.802 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:03:YTczZTY0NjI1NzdmYjQ3YTc2MzZjZWNmMDk3MTIxZTNlM2UwMTllZmE0ZTMxZmY2ZmJkNzczZmVhMzM3ODU2ZAuDCpA=: 00:11:54.736 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.736 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:54.736 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.736 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.736 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.736 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:54.736 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.736 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:54.736 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:54.995 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:11:54.995 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.995 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:54.995 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:54.995 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:54.995 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.995 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.995 13:54:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.995 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.995 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.995 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.995 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.261 00:11:55.527 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:55.527 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:55.527 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.793 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.793 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.793 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.793 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.793 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.793 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.793 { 00:11:55.793 "cntlid": 81, 00:11:55.793 "qid": 0, 00:11:55.793 "state": "enabled", 00:11:55.793 "thread": "nvmf_tgt_poll_group_000", 00:11:55.793 "listen_address": { 00:11:55.793 "trtype": "TCP", 00:11:55.793 "adrfam": "IPv4", 00:11:55.793 "traddr": "10.0.0.2", 00:11:55.793 "trsvcid": "4420" 00:11:55.793 }, 00:11:55.793 "peer_address": { 00:11:55.793 "trtype": "TCP", 00:11:55.793 "adrfam": "IPv4", 00:11:55.793 "traddr": "10.0.0.1", 00:11:55.793 "trsvcid": "50332" 00:11:55.793 }, 00:11:55.793 "auth": { 00:11:55.793 "state": "completed", 00:11:55.793 "digest": "sha384", 00:11:55.793 "dhgroup": "ffdhe6144" 00:11:55.793 } 00:11:55.793 } 00:11:55.793 ]' 00:11:55.793 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.793 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:55.793 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.793 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:55.793 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.793 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:11:55.793 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.793 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.061 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:00:YmRlN2E2MDliZDQzMDExYjhlNThiM2Y4YWMxODgzYTBlZTZlZDdhOGU4OWE0OTg3IOMeBA==: --dhchap-ctrl-secret DHHC-1:03:NWMzNjFlODViNmI5YWNjZDlkY2U1ZDRkYzU1YTk2MDlkMDQ3MDE1OGZhMDQ0YjcyODVlNTUwMjFlYzhlMmU1ZcIFnmw=: 00:11:57.046 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.046 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:57.046 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.046 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.046 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.046 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:57.046 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:57.046 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:57.046 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:11:57.046 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:57.046 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:57.046 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:57.046 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:57.046 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.046 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.046 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.046 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.046 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.046 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.046 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.615 00:11:57.615 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.615 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.615 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:57.873 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.873 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.873 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.873 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.873 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.873 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:57.873 { 00:11:57.873 "cntlid": 83, 00:11:57.873 "qid": 0, 00:11:57.873 "state": "enabled", 00:11:57.873 "thread": "nvmf_tgt_poll_group_000", 00:11:57.873 "listen_address": { 00:11:57.873 "trtype": "TCP", 00:11:57.873 "adrfam": "IPv4", 00:11:57.873 "traddr": "10.0.0.2", 00:11:57.873 "trsvcid": "4420" 00:11:57.873 }, 00:11:57.873 "peer_address": { 00:11:57.873 "trtype": "TCP", 00:11:57.873 "adrfam": "IPv4", 00:11:57.873 "traddr": "10.0.0.1", 00:11:57.873 "trsvcid": "50352" 00:11:57.873 }, 00:11:57.873 "auth": { 00:11:57.873 "state": "completed", 00:11:57.873 "digest": "sha384", 00:11:57.873 "dhgroup": "ffdhe6144" 00:11:57.873 } 00:11:57.873 } 00:11:57.873 ]' 00:11:57.873 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:57.874 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:57.874 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:57.874 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:57.874 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:58.132 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.132 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.132 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.391 13:54:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:01:MzBjZTY1MGQ0MTJiYzA4ZWI2MzUyYzEyNzM3NDFhN2MOF0u8: --dhchap-ctrl-secret DHHC-1:02:MDViZWYyMmY4YzRmN2NlZmUyOWU0ZGQyMTU0NWNhMmRlNGUwYTRiY2U1NjYxZTNk/peKUw==: 00:11:58.958 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.958 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:11:58.958 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.958 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.958 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.958 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.958 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:58.958 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:59.217 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:11:59.217 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:59.217 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:59.217 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:59.217 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:59.217 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.217 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.217 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.217 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.217 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.217 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.217 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.784 00:11:59.784 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.784 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.784 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:00.043 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.043 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.043 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.043 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.043 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.043 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:00.043 { 00:12:00.043 "cntlid": 85, 00:12:00.043 "qid": 0, 00:12:00.043 "state": "enabled", 00:12:00.043 "thread": "nvmf_tgt_poll_group_000", 00:12:00.043 "listen_address": { 00:12:00.043 "trtype": "TCP", 00:12:00.043 "adrfam": "IPv4", 00:12:00.043 "traddr": "10.0.0.2", 00:12:00.043 "trsvcid": "4420" 00:12:00.043 }, 00:12:00.043 "peer_address": { 00:12:00.043 "trtype": "TCP", 00:12:00.043 "adrfam": "IPv4", 00:12:00.043 "traddr": "10.0.0.1", 00:12:00.043 "trsvcid": "50376" 00:12:00.043 }, 00:12:00.043 "auth": { 00:12:00.043 "state": "completed", 00:12:00.043 "digest": "sha384", 00:12:00.043 "dhgroup": "ffdhe6144" 00:12:00.043 } 00:12:00.043 } 00:12:00.043 ]' 00:12:00.043 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:00.302 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:00.302 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:00.302 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:00.302 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:00.302 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.302 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.302 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.560 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:02:MWY2ZjZjNWVmMjc0ZTY3YThiMGU2NWEwOWUwZDhhNTMwZDgyMzY4MTIxZjU3NGQwjIQi8A==: --dhchap-ctrl-secret 
DHHC-1:01:ZjFiOTliNGNiMGJkMmUyMDhhYmFiNDdjZTVkZTU0YTIx/32G: 00:12:01.495 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.495 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:01.495 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.495 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.495 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.495 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:01.495 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:01.495 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:01.495 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:12:01.495 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:01.495 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:01.495 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:01.495 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:01.495 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.496 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key3 00:12:01.496 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.496 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.496 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.496 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:01.496 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:02.062 00:12:02.062 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:02.062 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:12:02.062 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.320 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.320 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.320 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.320 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.320 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.320 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:02.320 { 00:12:02.320 "cntlid": 87, 00:12:02.320 "qid": 0, 00:12:02.320 "state": "enabled", 00:12:02.320 "thread": "nvmf_tgt_poll_group_000", 00:12:02.320 "listen_address": { 00:12:02.320 "trtype": "TCP", 00:12:02.320 "adrfam": "IPv4", 00:12:02.320 "traddr": "10.0.0.2", 00:12:02.320 "trsvcid": "4420" 00:12:02.320 }, 00:12:02.320 "peer_address": { 00:12:02.321 "trtype": "TCP", 00:12:02.321 "adrfam": "IPv4", 00:12:02.321 "traddr": "10.0.0.1", 00:12:02.321 "trsvcid": "50400" 00:12:02.321 }, 00:12:02.321 "auth": { 00:12:02.321 "state": "completed", 00:12:02.321 "digest": "sha384", 00:12:02.321 "dhgroup": "ffdhe6144" 00:12:02.321 } 00:12:02.321 } 00:12:02.321 ]' 00:12:02.321 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:02.321 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:02.321 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:02.321 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:02.321 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:02.321 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.321 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.321 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.645 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:03:YTczZTY0NjI1NzdmYjQ3YTc2MzZjZWNmMDk3MTIxZTNlM2UwMTllZmE0ZTMxZmY2ZmJkNzczZmVhMzM3ODU2ZAuDCpA=: 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.581 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.147 00:12:04.405 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:04.405 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.405 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:04.662 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.662 13:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.662 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.662 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.662 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.662 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.662 { 00:12:04.662 "cntlid": 89, 00:12:04.662 "qid": 0, 00:12:04.662 "state": "enabled", 00:12:04.662 "thread": "nvmf_tgt_poll_group_000", 00:12:04.662 "listen_address": { 00:12:04.662 "trtype": "TCP", 00:12:04.662 "adrfam": "IPv4", 00:12:04.662 "traddr": "10.0.0.2", 00:12:04.662 "trsvcid": "4420" 00:12:04.662 }, 00:12:04.662 "peer_address": { 00:12:04.662 "trtype": "TCP", 00:12:04.662 "adrfam": "IPv4", 00:12:04.662 "traddr": "10.0.0.1", 00:12:04.662 "trsvcid": "40430" 00:12:04.662 }, 00:12:04.662 "auth": { 00:12:04.662 "state": "completed", 00:12:04.662 "digest": "sha384", 00:12:04.662 "dhgroup": "ffdhe8192" 00:12:04.662 } 00:12:04.662 } 00:12:04.662 ]' 00:12:04.662 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:04.662 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:04.662 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:04.662 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:04.662 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:04.662 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.662 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.662 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.920 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:00:YmRlN2E2MDliZDQzMDExYjhlNThiM2Y4YWMxODgzYTBlZTZlZDdhOGU4OWE0OTg3IOMeBA==: --dhchap-ctrl-secret DHHC-1:03:NWMzNjFlODViNmI5YWNjZDlkY2U1ZDRkYzU1YTk2MDlkMDQ3MDE1OGZhMDQ0YjcyODVlNTUwMjFlYzhlMmU1ZcIFnmw=: 00:12:05.856 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.856 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:05.856 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.856 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.856 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.856 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.856 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:05.856 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:06.115 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:12:06.115 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:06.115 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:06.115 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:06.115 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:06.115 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.115 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.115 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.115 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.115 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.115 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.115 13:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.684 00:12:06.684 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:06.684 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:06.684 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.943 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.943 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.943 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.943 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.943 13:54:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.943 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:06.943 { 00:12:06.943 "cntlid": 91, 00:12:06.943 "qid": 0, 00:12:06.943 "state": "enabled", 00:12:06.943 "thread": "nvmf_tgt_poll_group_000", 00:12:06.943 "listen_address": { 00:12:06.943 "trtype": "TCP", 00:12:06.943 "adrfam": "IPv4", 00:12:06.943 "traddr": "10.0.0.2", 00:12:06.943 "trsvcid": "4420" 00:12:06.943 }, 00:12:06.943 "peer_address": { 00:12:06.943 "trtype": "TCP", 00:12:06.943 "adrfam": "IPv4", 00:12:06.943 "traddr": "10.0.0.1", 00:12:06.943 "trsvcid": "40448" 00:12:06.943 }, 00:12:06.943 "auth": { 00:12:06.943 "state": "completed", 00:12:06.943 "digest": "sha384", 00:12:06.943 "dhgroup": "ffdhe8192" 00:12:06.943 } 00:12:06.943 } 00:12:06.943 ]' 00:12:06.943 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:07.254 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:07.254 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:07.254 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:07.254 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:07.254 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.254 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.254 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.513 13:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:01:MzBjZTY1MGQ0MTJiYzA4ZWI2MzUyYzEyNzM3NDFhN2MOF0u8: --dhchap-ctrl-secret DHHC-1:02:MDViZWYyMmY4YzRmN2NlZmUyOWU0ZGQyMTU0NWNhMmRlNGUwYTRiY2U1NjYxZTNk/peKUw==: 00:12:08.448 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.448 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:08.448 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.448 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.448 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.448 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:08.448 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:08.448 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:08.706 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:12:08.706 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:08.706 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:08.706 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:08.706 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:08.706 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.706 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.706 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.706 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.706 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.706 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.706 13:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.272 00:12:09.272 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:09.272 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:09.272 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.530 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.530 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.530 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.530 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.530 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.530 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:09.530 { 00:12:09.530 "cntlid": 93, 00:12:09.530 "qid": 0, 00:12:09.530 "state": "enabled", 00:12:09.530 "thread": "nvmf_tgt_poll_group_000", 00:12:09.530 "listen_address": { 00:12:09.530 "trtype": "TCP", 00:12:09.530 "adrfam": "IPv4", 
00:12:09.530 "traddr": "10.0.0.2", 00:12:09.530 "trsvcid": "4420" 00:12:09.530 }, 00:12:09.530 "peer_address": { 00:12:09.530 "trtype": "TCP", 00:12:09.530 "adrfam": "IPv4", 00:12:09.530 "traddr": "10.0.0.1", 00:12:09.530 "trsvcid": "40476" 00:12:09.530 }, 00:12:09.530 "auth": { 00:12:09.530 "state": "completed", 00:12:09.530 "digest": "sha384", 00:12:09.530 "dhgroup": "ffdhe8192" 00:12:09.530 } 00:12:09.530 } 00:12:09.530 ]' 00:12:09.530 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:09.530 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:09.530 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:09.530 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:09.530 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:09.819 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.819 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.819 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.819 13:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:02:MWY2ZjZjNWVmMjc0ZTY3YThiMGU2NWEwOWUwZDhhNTMwZDgyMzY4MTIxZjU3NGQwjIQi8A==: --dhchap-ctrl-secret DHHC-1:01:ZjFiOTliNGNiMGJkMmUyMDhhYmFiNDdjZTVkZTU0YTIx/32G: 00:12:10.754 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.754 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:10.754 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.754 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.754 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.754 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:10.754 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:10.754 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:10.754 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:12:10.754 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:10.754 13:54:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:10.754 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:10.754 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:10.754 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.754 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key3 00:12:10.754 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.754 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.754 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.755 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:10.755 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:11.321 00:12:11.580 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:11.580 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:11.580 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.839 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.839 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.839 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.839 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.839 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.839 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:11.839 { 00:12:11.839 "cntlid": 95, 00:12:11.839 "qid": 0, 00:12:11.839 "state": "enabled", 00:12:11.839 "thread": "nvmf_tgt_poll_group_000", 00:12:11.839 "listen_address": { 00:12:11.839 "trtype": "TCP", 00:12:11.839 "adrfam": "IPv4", 00:12:11.839 "traddr": "10.0.0.2", 00:12:11.839 "trsvcid": "4420" 00:12:11.839 }, 00:12:11.839 "peer_address": { 00:12:11.839 "trtype": "TCP", 00:12:11.839 "adrfam": "IPv4", 00:12:11.839 "traddr": "10.0.0.1", 00:12:11.839 "trsvcid": "40506" 00:12:11.839 }, 00:12:11.839 "auth": { 00:12:11.839 "state": "completed", 00:12:11.839 "digest": "sha384", 00:12:11.839 "dhgroup": "ffdhe8192" 00:12:11.839 } 00:12:11.839 } 00:12:11.839 ]' 00:12:11.839 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:11.839 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:11.839 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:11.839 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:11.839 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:11.839 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.839 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.839 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.097 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:03:YTczZTY0NjI1NzdmYjQ3YTc2MzZjZWNmMDk3MTIxZTNlM2UwMTllZmE0ZTMxZmY2ZmJkNzczZmVhMzM3ODU2ZAuDCpA=: 00:12:13.030 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.030 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:13.030 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.030 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.030 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.030 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:13.030 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:13.030 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:13.030 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:13.030 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:13.030 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:12:13.030 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.030 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:13.030 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:13.030 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:13.030 13:55:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.030 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.030 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.030 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.030 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.030 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.031 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.596 00:12:13.597 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:13.597 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:13.597 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.597 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.597 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.597 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.597 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.855 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.855 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:13.855 { 00:12:13.855 "cntlid": 97, 00:12:13.855 "qid": 0, 00:12:13.855 "state": "enabled", 00:12:13.855 "thread": "nvmf_tgt_poll_group_000", 00:12:13.855 "listen_address": { 00:12:13.855 "trtype": "TCP", 00:12:13.855 "adrfam": "IPv4", 00:12:13.855 "traddr": "10.0.0.2", 00:12:13.855 "trsvcid": "4420" 00:12:13.855 }, 00:12:13.855 "peer_address": { 00:12:13.855 "trtype": "TCP", 00:12:13.855 "adrfam": "IPv4", 00:12:13.855 "traddr": "10.0.0.1", 00:12:13.855 "trsvcid": "54862" 00:12:13.855 }, 00:12:13.855 "auth": { 00:12:13.855 "state": "completed", 00:12:13.855 "digest": "sha512", 00:12:13.855 "dhgroup": "null" 00:12:13.855 } 00:12:13.855 } 00:12:13.855 ]' 00:12:13.855 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:13.855 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:13.855 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:13.855 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:13.855 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:13.855 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.855 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.855 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.113 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:00:YmRlN2E2MDliZDQzMDExYjhlNThiM2Y4YWMxODgzYTBlZTZlZDdhOGU4OWE0OTg3IOMeBA==: --dhchap-ctrl-secret DHHC-1:03:NWMzNjFlODViNmI5YWNjZDlkY2U1ZDRkYzU1YTk2MDlkMDQ3MDE1OGZhMDQ0YjcyODVlNTUwMjFlYzhlMmU1ZcIFnmw=: 00:12:15.049 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.049 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:15.049 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.049 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.049 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.049 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:15.049 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:15.049 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:15.308 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:12:15.308 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:15.308 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:15.308 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:15.308 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:15.308 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.308 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.308 13:55:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.308 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.308 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.308 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.308 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.567 00:12:15.567 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:15.567 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:15.567 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.825 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.825 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.825 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.825 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.825 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.825 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:15.825 { 00:12:15.825 "cntlid": 99, 00:12:15.825 "qid": 0, 00:12:15.825 "state": "enabled", 00:12:15.825 "thread": "nvmf_tgt_poll_group_000", 00:12:15.825 "listen_address": { 00:12:15.825 "trtype": "TCP", 00:12:15.825 "adrfam": "IPv4", 00:12:15.825 "traddr": "10.0.0.2", 00:12:15.825 "trsvcid": "4420" 00:12:15.825 }, 00:12:15.825 "peer_address": { 00:12:15.825 "trtype": "TCP", 00:12:15.825 "adrfam": "IPv4", 00:12:15.825 "traddr": "10.0.0.1", 00:12:15.825 "trsvcid": "54882" 00:12:15.825 }, 00:12:15.825 "auth": { 00:12:15.825 "state": "completed", 00:12:15.825 "digest": "sha512", 00:12:15.825 "dhgroup": "null" 00:12:15.825 } 00:12:15.825 } 00:12:15.825 ]' 00:12:15.825 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:15.825 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:15.825 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:15.825 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:15.825 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:15.825 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
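The block above is one pass of the test's verification step: after attaching with key1, it reads back the controller name and then asks the target which digest, dhgroup and auth state the queue pair actually negotiated. A minimal standalone sketch of that check, assuming it is run from the SPDK repo root, that the host RPC app listens on /var/tmp/host.sock, that the target app uses its default RPC socket, and that jq is installed:

    # Confirm the host-side controller attached under the expected name
    name=$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    # Ask the target which parameters the qpair negotiated during CONNECT
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Tear the session down before the next digest/dhgroup/key combination
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0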
00:12:15.825 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.825 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.391 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:01:MzBjZTY1MGQ0MTJiYzA4ZWI2MzUyYzEyNzM3NDFhN2MOF0u8: --dhchap-ctrl-secret DHHC-1:02:MDViZWYyMmY4YzRmN2NlZmUyOWU0ZGQyMTU0NWNhMmRlNGUwYTRiY2U1NjYxZTNk/peKUw==: 00:12:16.958 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.958 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:16.959 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.959 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.959 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.959 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:16.959 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:16.959 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:17.218 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:12:17.218 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.218 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:17.218 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:17.218 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:17.218 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.218 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.218 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.218 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.218 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.218 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.218 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.786 00:12:17.786 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:17.786 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:17.786 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.786 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.786 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.786 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.786 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.786 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.786 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:17.786 { 00:12:17.786 "cntlid": 101, 00:12:17.786 "qid": 0, 00:12:17.786 "state": "enabled", 00:12:17.786 "thread": "nvmf_tgt_poll_group_000", 00:12:17.786 "listen_address": { 00:12:17.786 "trtype": "TCP", 00:12:17.786 "adrfam": "IPv4", 00:12:17.786 "traddr": "10.0.0.2", 00:12:17.786 "trsvcid": "4420" 00:12:17.786 }, 00:12:17.786 "peer_address": { 00:12:17.786 "trtype": "TCP", 00:12:17.786 "adrfam": "IPv4", 00:12:17.786 "traddr": "10.0.0.1", 00:12:17.786 "trsvcid": "54892" 00:12:17.786 }, 00:12:17.786 "auth": { 00:12:17.786 "state": "completed", 00:12:17.786 "digest": "sha512", 00:12:17.786 "dhgroup": "null" 00:12:17.786 } 00:12:17.786 } 00:12:17.786 ]' 00:12:17.786 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:18.045 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:18.045 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:18.045 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:18.045 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:18.045 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.045 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.045 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.304 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:02:MWY2ZjZjNWVmMjc0ZTY3YThiMGU2NWEwOWUwZDhhNTMwZDgyMzY4MTIxZjU3NGQwjIQi8A==: --dhchap-ctrl-secret DHHC-1:01:ZjFiOTliNGNiMGJkMmUyMDhhYmFiNDdjZTVkZTU0YTIx/32G: 00:12:19.239 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.239 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:19.239 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.239 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.239 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.239 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:19.239 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:19.240 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:19.240 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:12:19.240 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:19.240 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:19.240 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:19.240 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:19.240 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.240 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key3 00:12:19.240 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.240 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.240 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.240 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:19.240 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:19.810 
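Each iteration the trace is walking follows the same shape: constrain the host to one digest/dhgroup pair, authorize the host NQN on the subsystem with a DH-HMAC-CHAP key, then attach from the host side so authentication runs during CONNECT. A minimal sketch of the iteration shown above (sha512, null, key3), assuming key3 was registered with both the target and the host earlier in the script and that the paths, addresses and sockets match the trace:

    digest=sha512 dhgroup=null keyid=3
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Restrict the host to the digest/dhgroup pair under test
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Authorize the host on the target, binding it to the DH-HMAC-CHAP key
    scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"

    # Attach from the host side; authentication happens during CONNECT
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid"

In this run key3 carries no controller key, so the optional --dhchap-ctrlr-key argument is simply omitted, which matches the ${ckeys[$3]:+...} expansion in the trace.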
00:12:19.810 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:19.810 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:19.810 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.069 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.069 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.069 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.069 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.069 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.069 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:20.069 { 00:12:20.069 "cntlid": 103, 00:12:20.069 "qid": 0, 00:12:20.069 "state": "enabled", 00:12:20.069 "thread": "nvmf_tgt_poll_group_000", 00:12:20.069 "listen_address": { 00:12:20.069 "trtype": "TCP", 00:12:20.069 "adrfam": "IPv4", 00:12:20.069 "traddr": "10.0.0.2", 00:12:20.069 "trsvcid": "4420" 00:12:20.069 }, 00:12:20.069 "peer_address": { 00:12:20.069 "trtype": "TCP", 00:12:20.069 "adrfam": "IPv4", 00:12:20.069 "traddr": "10.0.0.1", 00:12:20.069 "trsvcid": "54900" 00:12:20.069 }, 00:12:20.069 "auth": { 00:12:20.069 "state": "completed", 00:12:20.069 "digest": "sha512", 00:12:20.069 "dhgroup": "null" 00:12:20.069 } 00:12:20.069 } 00:12:20.069 ]' 00:12:20.069 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:20.069 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:20.069 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:20.069 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:20.069 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:20.069 13:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.069 13:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.069 13:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.328 13:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:03:YTczZTY0NjI1NzdmYjQ3YTc2MzZjZWNmMDk3MTIxZTNlM2UwMTllZmE0ZTMxZmY2ZmJkNzczZmVhMzM3ODU2ZAuDCpA=: 00:12:21.263 13:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.263 13:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:21.263 13:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.263 13:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.263 13:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.263 13:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:21.263 13:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:21.263 13:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:21.263 13:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:21.263 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:12:21.263 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:21.263 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:21.264 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:21.264 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:21.264 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.264 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.264 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.264 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.523 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.523 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.523 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.781 00:12:21.781 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:21.781 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:21.781 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:12:22.040 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.040 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.040 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.040 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.040 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.040 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:22.040 { 00:12:22.040 "cntlid": 105, 00:12:22.040 "qid": 0, 00:12:22.040 "state": "enabled", 00:12:22.040 "thread": "nvmf_tgt_poll_group_000", 00:12:22.040 "listen_address": { 00:12:22.040 "trtype": "TCP", 00:12:22.040 "adrfam": "IPv4", 00:12:22.040 "traddr": "10.0.0.2", 00:12:22.040 "trsvcid": "4420" 00:12:22.040 }, 00:12:22.040 "peer_address": { 00:12:22.040 "trtype": "TCP", 00:12:22.040 "adrfam": "IPv4", 00:12:22.040 "traddr": "10.0.0.1", 00:12:22.040 "trsvcid": "54920" 00:12:22.040 }, 00:12:22.040 "auth": { 00:12:22.040 "state": "completed", 00:12:22.040 "digest": "sha512", 00:12:22.040 "dhgroup": "ffdhe2048" 00:12:22.040 } 00:12:22.040 } 00:12:22.040 ]' 00:12:22.040 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:22.040 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:22.040 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:22.299 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:22.299 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:22.299 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.299 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.299 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.557 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:00:YmRlN2E2MDliZDQzMDExYjhlNThiM2Y4YWMxODgzYTBlZTZlZDdhOGU4OWE0OTg3IOMeBA==: --dhchap-ctrl-secret DHHC-1:03:NWMzNjFlODViNmI5YWNjZDlkY2U1ZDRkYzU1YTk2MDlkMDQ3MDE1OGZhMDQ0YjcyODVlNTUwMjFlYzhlMmU1ZcIFnmw=: 00:12:23.124 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.124 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:23.124 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.383 13:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.383 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.383 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:23.383 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:23.383 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:23.643 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:12:23.643 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:23.643 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:23.643 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:23.643 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:23.643 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.643 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.643 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.643 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.643 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.643 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.643 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.902 00:12:23.902 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:23.902 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.902 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:24.161 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.161 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.161 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:12:24.161 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.161 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.161 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:24.161 { 00:12:24.161 "cntlid": 107, 00:12:24.161 "qid": 0, 00:12:24.161 "state": "enabled", 00:12:24.161 "thread": "nvmf_tgt_poll_group_000", 00:12:24.161 "listen_address": { 00:12:24.161 "trtype": "TCP", 00:12:24.161 "adrfam": "IPv4", 00:12:24.161 "traddr": "10.0.0.2", 00:12:24.161 "trsvcid": "4420" 00:12:24.161 }, 00:12:24.161 "peer_address": { 00:12:24.162 "trtype": "TCP", 00:12:24.162 "adrfam": "IPv4", 00:12:24.162 "traddr": "10.0.0.1", 00:12:24.162 "trsvcid": "51036" 00:12:24.162 }, 00:12:24.162 "auth": { 00:12:24.162 "state": "completed", 00:12:24.162 "digest": "sha512", 00:12:24.162 "dhgroup": "ffdhe2048" 00:12:24.162 } 00:12:24.162 } 00:12:24.162 ]' 00:12:24.162 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:24.422 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:24.422 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:24.422 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:24.422 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.422 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.422 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.422 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.681 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:01:MzBjZTY1MGQ0MTJiYzA4ZWI2MzUyYzEyNzM3NDFhN2MOF0u8: --dhchap-ctrl-secret DHHC-1:02:MDViZWYyMmY4YzRmN2NlZmUyOWU0ZGQyMTU0NWNhMmRlNGUwYTRiY2U1NjYxZTNk/peKUw==: 00:12:25.655 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.655 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:25.655 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.655 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.655 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.655 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:25.655 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:25.655 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:25.655 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:25.655 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:25.655 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:25.655 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:25.655 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:25.656 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.656 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.656 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.656 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.656 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.656 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.656 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.956 00:12:25.956 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:25.956 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:25.956 13:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.523 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.523 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.523 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.523 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.523 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.523 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:26.523 { 00:12:26.523 "cntlid": 109, 00:12:26.523 "qid": 0, 
00:12:26.523 "state": "enabled", 00:12:26.523 "thread": "nvmf_tgt_poll_group_000", 00:12:26.523 "listen_address": { 00:12:26.523 "trtype": "TCP", 00:12:26.523 "adrfam": "IPv4", 00:12:26.523 "traddr": "10.0.0.2", 00:12:26.523 "trsvcid": "4420" 00:12:26.523 }, 00:12:26.523 "peer_address": { 00:12:26.523 "trtype": "TCP", 00:12:26.523 "adrfam": "IPv4", 00:12:26.523 "traddr": "10.0.0.1", 00:12:26.523 "trsvcid": "51074" 00:12:26.523 }, 00:12:26.523 "auth": { 00:12:26.523 "state": "completed", 00:12:26.523 "digest": "sha512", 00:12:26.523 "dhgroup": "ffdhe2048" 00:12:26.523 } 00:12:26.523 } 00:12:26.523 ]' 00:12:26.523 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:26.523 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:26.523 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:26.523 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:26.523 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:26.523 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.523 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.523 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.782 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:02:MWY2ZjZjNWVmMjc0ZTY3YThiMGU2NWEwOWUwZDhhNTMwZDgyMzY4MTIxZjU3NGQwjIQi8A==: --dhchap-ctrl-secret DHHC-1:01:ZjFiOTliNGNiMGJkMmUyMDhhYmFiNDdjZTVkZTU0YTIx/32G: 00:12:27.350 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.350 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:27.350 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.350 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.608 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.608 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:27.608 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:27.608 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:27.866 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 
ffdhe2048 3 00:12:27.866 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:27.866 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:27.866 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:27.866 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:27.866 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.866 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key3 00:12:27.866 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.866 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.866 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.866 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:27.866 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:28.125 00:12:28.125 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:28.125 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:28.125 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.384 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.384 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.384 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.384 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.384 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.384 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:28.384 { 00:12:28.384 "cntlid": 111, 00:12:28.384 "qid": 0, 00:12:28.384 "state": "enabled", 00:12:28.384 "thread": "nvmf_tgt_poll_group_000", 00:12:28.384 "listen_address": { 00:12:28.384 "trtype": "TCP", 00:12:28.384 "adrfam": "IPv4", 00:12:28.384 "traddr": "10.0.0.2", 00:12:28.384 "trsvcid": "4420" 00:12:28.384 }, 00:12:28.384 "peer_address": { 00:12:28.384 "trtype": "TCP", 00:12:28.384 "adrfam": "IPv4", 00:12:28.384 "traddr": "10.0.0.1", 00:12:28.384 "trsvcid": "51110" 00:12:28.384 }, 00:12:28.384 "auth": { 00:12:28.384 "state": "completed", 00:12:28.384 
"digest": "sha512", 00:12:28.384 "dhgroup": "ffdhe2048" 00:12:28.384 } 00:12:28.384 } 00:12:28.384 ]' 00:12:28.384 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:28.384 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:28.384 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:28.384 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:28.642 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:28.642 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.642 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.642 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.900 13:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:03:YTczZTY0NjI1NzdmYjQ3YTc2MzZjZWNmMDk3MTIxZTNlM2UwMTllZmE0ZTMxZmY2ZmJkNzczZmVhMzM3ODU2ZAuDCpA=: 00:12:29.466 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.725 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:29.725 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.725 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.725 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.725 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:29.725 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:29.725 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:29.725 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:29.984 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:29.984 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.984 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:29.984 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:29.984 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:12:29.984 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.984 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.984 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.984 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.984 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.984 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.984 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.243 00:12:30.243 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.243 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:30.243 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.503 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.503 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.503 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.503 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.503 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.503 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:30.503 { 00:12:30.503 "cntlid": 113, 00:12:30.503 "qid": 0, 00:12:30.503 "state": "enabled", 00:12:30.503 "thread": "nvmf_tgt_poll_group_000", 00:12:30.503 "listen_address": { 00:12:30.503 "trtype": "TCP", 00:12:30.503 "adrfam": "IPv4", 00:12:30.503 "traddr": "10.0.0.2", 00:12:30.503 "trsvcid": "4420" 00:12:30.503 }, 00:12:30.503 "peer_address": { 00:12:30.503 "trtype": "TCP", 00:12:30.503 "adrfam": "IPv4", 00:12:30.503 "traddr": "10.0.0.1", 00:12:30.503 "trsvcid": "51138" 00:12:30.503 }, 00:12:30.503 "auth": { 00:12:30.503 "state": "completed", 00:12:30.503 "digest": "sha512", 00:12:30.503 "dhgroup": "ffdhe3072" 00:12:30.503 } 00:12:30.503 } 00:12:30.503 ]' 00:12:30.503 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:30.761 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:30.761 13:55:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:30.761 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:30.761 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:30.761 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.761 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.761 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.020 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:00:YmRlN2E2MDliZDQzMDExYjhlNThiM2Y4YWMxODgzYTBlZTZlZDdhOGU4OWE0OTg3IOMeBA==: --dhchap-ctrl-secret DHHC-1:03:NWMzNjFlODViNmI5YWNjZDlkY2U1ZDRkYzU1YTk2MDlkMDQ3MDE1OGZhMDQ0YjcyODVlNTUwMjFlYzhlMmU1ZcIFnmw=: 00:12:31.956 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.956 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:31.956 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.956 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.956 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.956 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:31.956 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:31.956 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:31.956 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:31.956 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:31.956 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:31.956 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:31.956 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:31.956 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.956 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.956 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.956 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.956 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.956 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.956 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.523 00:12:32.523 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:32.523 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:32.523 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.782 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.782 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.782 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.782 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.782 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.782 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:32.782 { 00:12:32.782 "cntlid": 115, 00:12:32.782 "qid": 0, 00:12:32.782 "state": "enabled", 00:12:32.782 "thread": "nvmf_tgt_poll_group_000", 00:12:32.782 "listen_address": { 00:12:32.782 "trtype": "TCP", 00:12:32.782 "adrfam": "IPv4", 00:12:32.782 "traddr": "10.0.0.2", 00:12:32.782 "trsvcid": "4420" 00:12:32.782 }, 00:12:32.782 "peer_address": { 00:12:32.782 "trtype": "TCP", 00:12:32.782 "adrfam": "IPv4", 00:12:32.782 "traddr": "10.0.0.1", 00:12:32.782 "trsvcid": "51164" 00:12:32.782 }, 00:12:32.782 "auth": { 00:12:32.782 "state": "completed", 00:12:32.782 "digest": "sha512", 00:12:32.782 "dhgroup": "ffdhe3072" 00:12:32.782 } 00:12:32.782 } 00:12:32.782 ]' 00:12:32.782 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:32.782 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:32.782 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:32.782 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:32.782 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:32.782 13:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.782 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.782 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.350 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:01:MzBjZTY1MGQ0MTJiYzA4ZWI2MzUyYzEyNzM3NDFhN2MOF0u8: --dhchap-ctrl-secret DHHC-1:02:MDViZWYyMmY4YzRmN2NlZmUyOWU0ZGQyMTU0NWNhMmRlNGUwYTRiY2U1NjYxZTNk/peKUw==: 00:12:33.918 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.918 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:33.918 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.918 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.918 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.918 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:33.918 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:33.918 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:34.177 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:34.177 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:34.177 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:34.177 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:34.177 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:34.177 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.178 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.178 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.178 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.178 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.178 13:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.178 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.745 00:12:34.745 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:34.745 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.745 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:35.004 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.004 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.004 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.004 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.004 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.004 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:35.004 { 00:12:35.004 "cntlid": 117, 00:12:35.004 "qid": 0, 00:12:35.004 "state": "enabled", 00:12:35.004 "thread": "nvmf_tgt_poll_group_000", 00:12:35.004 "listen_address": { 00:12:35.004 "trtype": "TCP", 00:12:35.004 "adrfam": "IPv4", 00:12:35.004 "traddr": "10.0.0.2", 00:12:35.004 "trsvcid": "4420" 00:12:35.004 }, 00:12:35.004 "peer_address": { 00:12:35.004 "trtype": "TCP", 00:12:35.004 "adrfam": "IPv4", 00:12:35.004 "traddr": "10.0.0.1", 00:12:35.004 "trsvcid": "50458" 00:12:35.004 }, 00:12:35.004 "auth": { 00:12:35.004 "state": "completed", 00:12:35.004 "digest": "sha512", 00:12:35.004 "dhgroup": "ffdhe3072" 00:12:35.004 } 00:12:35.004 } 00:12:35.004 ]' 00:12:35.004 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:35.004 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:35.004 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:35.004 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:35.004 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:35.004 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.004 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.004 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
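The trace above amounts to one connect_authenticate() pass for sha512/ffdhe3072 with key2: set the host-side DH-HMAC-CHAP options, allow the host on the subsystem with a key pair, attach a controller through the authenticated path, confirm the qpair reports auth.state "completed", and detach again. A rough standalone sketch of that sequence follows; the NQNs, address, flags and rpc.py path are the ones traced above, while the target app's default RPC socket and the pre-registered keyring names key2/ckey2 are assumptions for illustration, not part of this run's setup script.

#!/usr/bin/env bash
# Illustrative sketch only -- condenses the RPC sequence traced above.
# Assumptions: the nvmf target app answers on rpc.py's default socket, the
# host-side bdev_nvme app on /var/tmp/host.sock, and keyring entries
# key2/ckey2 were registered beforehand.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89

# Host side: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
       --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Target side: allow the host with a host key and a controller (bidirectional) key.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
       --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach through the authenticated path, then inspect the qpair.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
       -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
       --dhchap-key key2 --dhchap-ctrlr-key ckey2
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expected: completed

# Tear down before the next digest/dhgroup/key combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0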
00:12:35.262 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:02:MWY2ZjZjNWVmMjc0ZTY3YThiMGU2NWEwOWUwZDhhNTMwZDgyMzY4MTIxZjU3NGQwjIQi8A==: --dhchap-ctrl-secret DHHC-1:01:ZjFiOTliNGNiMGJkMmUyMDhhYmFiNDdjZTVkZTU0YTIx/32G: 00:12:36.198 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.198 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:36.198 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.198 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.198 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.198 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:36.198 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:36.198 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:36.456 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:36.456 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:36.456 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:36.456 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:36.456 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:36.456 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.456 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key3 00:12:36.456 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.456 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.456 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.457 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:36.457 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:36.727 00:12:36.727 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:36.727 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.727 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:36.986 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.986 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.986 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.986 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.986 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.986 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:36.986 { 00:12:36.986 "cntlid": 119, 00:12:36.986 "qid": 0, 00:12:36.986 "state": "enabled", 00:12:36.986 "thread": "nvmf_tgt_poll_group_000", 00:12:36.986 "listen_address": { 00:12:36.986 "trtype": "TCP", 00:12:36.986 "adrfam": "IPv4", 00:12:36.986 "traddr": "10.0.0.2", 00:12:36.986 "trsvcid": "4420" 00:12:36.986 }, 00:12:36.986 "peer_address": { 00:12:36.986 "trtype": "TCP", 00:12:36.986 "adrfam": "IPv4", 00:12:36.986 "traddr": "10.0.0.1", 00:12:36.986 "trsvcid": "50482" 00:12:36.986 }, 00:12:36.986 "auth": { 00:12:36.986 "state": "completed", 00:12:36.986 "digest": "sha512", 00:12:36.986 "dhgroup": "ffdhe3072" 00:12:36.986 } 00:12:36.986 } 00:12:36.986 ]' 00:12:36.986 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:36.986 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:36.986 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:37.244 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:37.244 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:37.244 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.244 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.244 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.502 13:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:03:YTczZTY0NjI1NzdmYjQ3YTc2MzZjZWNmMDk3MTIxZTNlM2UwMTllZmE0ZTMxZmY2ZmJkNzczZmVhMzM3ODU2ZAuDCpA=: 00:12:38.436 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:12:38.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.437 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:38.437 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.437 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.437 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.437 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:38.437 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:38.437 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:38.437 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:38.437 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:12:38.437 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:38.437 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:38.437 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:38.437 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:38.437 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.437 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:38.437 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.437 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.437 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.437 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:38.437 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.003 00:12:39.003 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:39.003 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.003 13:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:39.261 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.261 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.261 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.261 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.261 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.261 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:39.261 { 00:12:39.261 "cntlid": 121, 00:12:39.261 "qid": 0, 00:12:39.261 "state": "enabled", 00:12:39.261 "thread": "nvmf_tgt_poll_group_000", 00:12:39.261 "listen_address": { 00:12:39.261 "trtype": "TCP", 00:12:39.261 "adrfam": "IPv4", 00:12:39.261 "traddr": "10.0.0.2", 00:12:39.261 "trsvcid": "4420" 00:12:39.262 }, 00:12:39.262 "peer_address": { 00:12:39.262 "trtype": "TCP", 00:12:39.262 "adrfam": "IPv4", 00:12:39.262 "traddr": "10.0.0.1", 00:12:39.262 "trsvcid": "50526" 00:12:39.262 }, 00:12:39.262 "auth": { 00:12:39.262 "state": "completed", 00:12:39.262 "digest": "sha512", 00:12:39.262 "dhgroup": "ffdhe4096" 00:12:39.262 } 00:12:39.262 } 00:12:39.262 ]' 00:12:39.262 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:39.262 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:39.262 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:39.262 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:39.262 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:39.520 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.520 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.520 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.778 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:00:YmRlN2E2MDliZDQzMDExYjhlNThiM2Y4YWMxODgzYTBlZTZlZDdhOGU4OWE0OTg3IOMeBA==: --dhchap-ctrl-secret DHHC-1:03:NWMzNjFlODViNmI5YWNjZDlkY2U1ZDRkYzU1YTk2MDlkMDQ3MDE1OGZhMDQ0YjcyODVlNTUwMjFlYzhlMmU1ZcIFnmw=: 00:12:40.345 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.345 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:40.345 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.345 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.603 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.603 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:40.603 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:40.603 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:40.862 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:12:40.862 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:40.862 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:40.862 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:40.862 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:40.862 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.862 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.862 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.862 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.862 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.862 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.862 13:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.121 00:12:41.121 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:41.121 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.121 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:41.380 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.380 13:55:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.380 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.380 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.380 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.380 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:41.380 { 00:12:41.380 "cntlid": 123, 00:12:41.380 "qid": 0, 00:12:41.380 "state": "enabled", 00:12:41.380 "thread": "nvmf_tgt_poll_group_000", 00:12:41.380 "listen_address": { 00:12:41.380 "trtype": "TCP", 00:12:41.380 "adrfam": "IPv4", 00:12:41.380 "traddr": "10.0.0.2", 00:12:41.380 "trsvcid": "4420" 00:12:41.380 }, 00:12:41.380 "peer_address": { 00:12:41.380 "trtype": "TCP", 00:12:41.380 "adrfam": "IPv4", 00:12:41.380 "traddr": "10.0.0.1", 00:12:41.380 "trsvcid": "50554" 00:12:41.380 }, 00:12:41.380 "auth": { 00:12:41.380 "state": "completed", 00:12:41.380 "digest": "sha512", 00:12:41.380 "dhgroup": "ffdhe4096" 00:12:41.380 } 00:12:41.380 } 00:12:41.380 ]' 00:12:41.380 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:41.380 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:41.380 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:41.638 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:41.638 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:41.638 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.638 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.638 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.897 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:01:MzBjZTY1MGQ0MTJiYzA4ZWI2MzUyYzEyNzM3NDFhN2MOF0u8: --dhchap-ctrl-secret DHHC-1:02:MDViZWYyMmY4YzRmN2NlZmUyOWU0ZGQyMTU0NWNhMmRlNGUwYTRiY2U1NjYxZTNk/peKUw==: 00:12:42.513 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.513 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:42.513 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.513 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.513 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
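Each dhgroup/key iteration then closes with the kernel-initiator leg traced above: nvme connect with the in-band DH-HMAC-CHAP secrets, disconnect, and remove the host from the subsystem so the next combination starts clean. A minimal sketch of that leg, with the secrets reduced to placeholders (the real run passes the generated DHHC-1 host and controller keys shown in the trace, and drops --dhchap-ctrl-secret when no controller key is configured, as with key3 above):

#!/usr/bin/env bash
# Sketch of the nvme-cli leg of each iteration; DHHC-1 values are placeholders.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89
hostid=71427938-e211-49fa-b6ad-486cdab0bd89
host_key='DHHC-1:01:<host secret>:'          # placeholder for the generated host key
ctrl_key='DHHC-1:02:<controller secret>:'    # placeholder; omit for unidirectional auth

# In-band authenticated connect from the kernel initiator (single I/O queue).
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
     --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"

# Tear down and revoke the host again before the next key/dhgroup pair.
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"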
00:12:42.513 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:42.514 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:42.514 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:42.769 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:12:42.769 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:42.769 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:42.769 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:42.769 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:42.769 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.769 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:42.769 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.769 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.769 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.769 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:42.769 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.026 00:12:43.284 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:43.284 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:43.284 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.542 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.542 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.542 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.542 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.542 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.542 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:43.542 { 00:12:43.542 "cntlid": 125, 00:12:43.542 "qid": 0, 00:12:43.542 "state": "enabled", 00:12:43.542 "thread": "nvmf_tgt_poll_group_000", 00:12:43.542 "listen_address": { 00:12:43.542 "trtype": "TCP", 00:12:43.542 "adrfam": "IPv4", 00:12:43.542 "traddr": "10.0.0.2", 00:12:43.542 "trsvcid": "4420" 00:12:43.542 }, 00:12:43.542 "peer_address": { 00:12:43.542 "trtype": "TCP", 00:12:43.542 "adrfam": "IPv4", 00:12:43.542 "traddr": "10.0.0.1", 00:12:43.542 "trsvcid": "33850" 00:12:43.542 }, 00:12:43.542 "auth": { 00:12:43.542 "state": "completed", 00:12:43.542 "digest": "sha512", 00:12:43.542 "dhgroup": "ffdhe4096" 00:12:43.542 } 00:12:43.542 } 00:12:43.542 ]' 00:12:43.542 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:43.542 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:43.542 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:43.542 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:43.542 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:43.542 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.542 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.542 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.111 13:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:02:MWY2ZjZjNWVmMjc0ZTY3YThiMGU2NWEwOWUwZDhhNTMwZDgyMzY4MTIxZjU3NGQwjIQi8A==: --dhchap-ctrl-secret DHHC-1:01:ZjFiOTliNGNiMGJkMmUyMDhhYmFiNDdjZTVkZTU0YTIx/32G: 00:12:44.678 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.678 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:44.678 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.678 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.678 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.678 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:44.678 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:44.678 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:44.945 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:12:44.945 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:44.945 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:44.945 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:44.945 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:44.945 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.945 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key3 00:12:44.945 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.945 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.945 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.945 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:44.945 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:45.516 00:12:45.516 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:45.516 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.516 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.516 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.516 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.516 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.516 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.516 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.516 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:45.516 { 00:12:45.516 "cntlid": 127, 00:12:45.516 "qid": 0, 00:12:45.516 "state": "enabled", 00:12:45.516 "thread": "nvmf_tgt_poll_group_000", 00:12:45.516 "listen_address": { 00:12:45.516 "trtype": "TCP", 00:12:45.516 "adrfam": "IPv4", 00:12:45.516 "traddr": "10.0.0.2", 00:12:45.516 "trsvcid": "4420" 00:12:45.516 }, 00:12:45.516 "peer_address": { 
00:12:45.516 "trtype": "TCP", 00:12:45.516 "adrfam": "IPv4", 00:12:45.516 "traddr": "10.0.0.1", 00:12:45.516 "trsvcid": "33890" 00:12:45.516 }, 00:12:45.516 "auth": { 00:12:45.516 "state": "completed", 00:12:45.516 "digest": "sha512", 00:12:45.516 "dhgroup": "ffdhe4096" 00:12:45.516 } 00:12:45.516 } 00:12:45.516 ]' 00:12:45.516 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:45.781 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:45.781 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.781 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:45.781 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.781 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.781 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.781 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.049 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:03:YTczZTY0NjI1NzdmYjQ3YTc2MzZjZWNmMDk3MTIxZTNlM2UwMTllZmE0ZTMxZmY2ZmJkNzczZmVhMzM3ODU2ZAuDCpA=: 00:12:46.640 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.640 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:46.640 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.640 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.640 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.640 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:46.640 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:46.640 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:46.640 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:46.913 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:12:46.913 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:46.913 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha512 00:12:46.913 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:46.914 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:46.914 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.914 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:46.914 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.914 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.914 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.914 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:46.914 13:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.551 00:12:47.551 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:47.551 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:47.551 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.809 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.809 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.809 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.809 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.809 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.809 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:47.809 { 00:12:47.809 "cntlid": 129, 00:12:47.809 "qid": 0, 00:12:47.809 "state": "enabled", 00:12:47.809 "thread": "nvmf_tgt_poll_group_000", 00:12:47.809 "listen_address": { 00:12:47.809 "trtype": "TCP", 00:12:47.809 "adrfam": "IPv4", 00:12:47.809 "traddr": "10.0.0.2", 00:12:47.809 "trsvcid": "4420" 00:12:47.809 }, 00:12:47.809 "peer_address": { 00:12:47.809 "trtype": "TCP", 00:12:47.809 "adrfam": "IPv4", 00:12:47.809 "traddr": "10.0.0.1", 00:12:47.809 "trsvcid": "33920" 00:12:47.809 }, 00:12:47.809 "auth": { 00:12:47.809 "state": "completed", 00:12:47.809 "digest": "sha512", 00:12:47.809 "dhgroup": "ffdhe6144" 00:12:47.809 } 00:12:47.809 } 00:12:47.809 ]' 00:12:47.809 13:55:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:47.809 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:47.809 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:47.809 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:47.809 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:47.809 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.809 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.809 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.068 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:00:YmRlN2E2MDliZDQzMDExYjhlNThiM2Y4YWMxODgzYTBlZTZlZDdhOGU4OWE0OTg3IOMeBA==: --dhchap-ctrl-secret DHHC-1:03:NWMzNjFlODViNmI5YWNjZDlkY2U1ZDRkYzU1YTk2MDlkMDQ3MDE1OGZhMDQ0YjcyODVlNTUwMjFlYzhlMmU1ZcIFnmw=: 00:12:48.636 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.895 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:48.895 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.895 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.895 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.895 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:48.895 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:48.895 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:49.153 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:12:49.153 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:49.153 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:49.153 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:49.153 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:49.153 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:12:49.153 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.153 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.153 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.153 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.153 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.153 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.411 00:12:49.411 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:49.411 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:49.411 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.669 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.669 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.669 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.669 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.669 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.669 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:49.669 { 00:12:49.669 "cntlid": 131, 00:12:49.669 "qid": 0, 00:12:49.669 "state": "enabled", 00:12:49.669 "thread": "nvmf_tgt_poll_group_000", 00:12:49.669 "listen_address": { 00:12:49.669 "trtype": "TCP", 00:12:49.669 "adrfam": "IPv4", 00:12:49.669 "traddr": "10.0.0.2", 00:12:49.669 "trsvcid": "4420" 00:12:49.669 }, 00:12:49.669 "peer_address": { 00:12:49.669 "trtype": "TCP", 00:12:49.669 "adrfam": "IPv4", 00:12:49.669 "traddr": "10.0.0.1", 00:12:49.669 "trsvcid": "33948" 00:12:49.669 }, 00:12:49.669 "auth": { 00:12:49.669 "state": "completed", 00:12:49.669 "digest": "sha512", 00:12:49.669 "dhgroup": "ffdhe6144" 00:12:49.669 } 00:12:49.669 } 00:12:49.669 ]' 00:12:49.669 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:49.927 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:49.927 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:49.927 13:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:49.927 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:49.927 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.927 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.927 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.185 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:01:MzBjZTY1MGQ0MTJiYzA4ZWI2MzUyYzEyNzM3NDFhN2MOF0u8: --dhchap-ctrl-secret DHHC-1:02:MDViZWYyMmY4YzRmN2NlZmUyOWU0ZGQyMTU0NWNhMmRlNGUwYTRiY2U1NjYxZTNk/peKUw==: 00:12:50.753 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.753 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:50.753 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.753 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.753 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.753 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:50.753 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:50.753 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:51.011 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:12:51.011 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:51.011 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:51.011 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:51.011 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:51.011 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.011 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.011 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:51.011 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.269 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.269 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.269 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.527 00:12:51.527 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:51.527 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.527 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:51.786 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.786 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.786 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.786 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.786 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.786 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:51.786 { 00:12:51.786 "cntlid": 133, 00:12:51.786 "qid": 0, 00:12:51.786 "state": "enabled", 00:12:51.786 "thread": "nvmf_tgt_poll_group_000", 00:12:51.786 "listen_address": { 00:12:51.786 "trtype": "TCP", 00:12:51.786 "adrfam": "IPv4", 00:12:51.786 "traddr": "10.0.0.2", 00:12:51.786 "trsvcid": "4420" 00:12:51.786 }, 00:12:51.786 "peer_address": { 00:12:51.786 "trtype": "TCP", 00:12:51.786 "adrfam": "IPv4", 00:12:51.786 "traddr": "10.0.0.1", 00:12:51.786 "trsvcid": "33976" 00:12:51.786 }, 00:12:51.786 "auth": { 00:12:51.786 "state": "completed", 00:12:51.786 "digest": "sha512", 00:12:51.786 "dhgroup": "ffdhe6144" 00:12:51.786 } 00:12:51.786 } 00:12:51.786 ]' 00:12:51.786 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:52.044 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:52.044 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:52.044 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:52.044 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:52.044 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.044 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.044 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.303 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:02:MWY2ZjZjNWVmMjc0ZTY3YThiMGU2NWEwOWUwZDhhNTMwZDgyMzY4MTIxZjU3NGQwjIQi8A==: --dhchap-ctrl-secret DHHC-1:01:ZjFiOTliNGNiMGJkMmUyMDhhYmFiNDdjZTVkZTU0YTIx/32G: 00:12:52.871 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.871 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:52.871 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.871 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.871 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.871 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:52.871 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:52.871 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:53.130 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:12:53.130 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:53.130 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:53.130 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:53.130 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:53.130 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.130 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key3 00:12:53.130 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.130 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.389 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.389 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:53.389 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:53.646 00:12:53.646 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:53.646 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:53.646 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.906 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.906 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.906 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.906 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.906 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.906 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:53.906 { 00:12:53.906 "cntlid": 135, 00:12:53.906 "qid": 0, 00:12:53.906 "state": "enabled", 00:12:53.906 "thread": "nvmf_tgt_poll_group_000", 00:12:53.906 "listen_address": { 00:12:53.906 "trtype": "TCP", 00:12:53.906 "adrfam": "IPv4", 00:12:53.906 "traddr": "10.0.0.2", 00:12:53.906 "trsvcid": "4420" 00:12:53.906 }, 00:12:53.906 "peer_address": { 00:12:53.906 "trtype": "TCP", 00:12:53.906 "adrfam": "IPv4", 00:12:53.906 "traddr": "10.0.0.1", 00:12:53.906 "trsvcid": "50596" 00:12:53.906 }, 00:12:53.906 "auth": { 00:12:53.906 "state": "completed", 00:12:53.906 "digest": "sha512", 00:12:53.906 "dhgroup": "ffdhe6144" 00:12:53.906 } 00:12:53.906 } 00:12:53.906 ]' 00:12:53.906 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:54.169 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:54.169 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:54.169 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:54.169 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:54.169 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.169 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.169 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.428 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:03:YTczZTY0NjI1NzdmYjQ3YTc2MzZjZWNmMDk3MTIxZTNlM2UwMTllZmE0ZTMxZmY2ZmJkNzczZmVhMzM3ODU2ZAuDCpA=: 00:12:55.364 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.364 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:55.364 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.364 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.364 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.364 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:55.364 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:55.364 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:55.364 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:55.622 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:12:55.623 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:55.623 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:55.623 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:55.623 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:55.623 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.623 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.623 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.623 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.623 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.623 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.623 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.190 00:12:56.190 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:56.190 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.190 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:56.448 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.448 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.448 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.448 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.448 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.448 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:56.448 { 00:12:56.448 "cntlid": 137, 00:12:56.448 "qid": 0, 00:12:56.448 "state": "enabled", 00:12:56.448 "thread": "nvmf_tgt_poll_group_000", 00:12:56.448 "listen_address": { 00:12:56.448 "trtype": "TCP", 00:12:56.448 "adrfam": "IPv4", 00:12:56.448 "traddr": "10.0.0.2", 00:12:56.448 "trsvcid": "4420" 00:12:56.448 }, 00:12:56.448 "peer_address": { 00:12:56.448 "trtype": "TCP", 00:12:56.448 "adrfam": "IPv4", 00:12:56.448 "traddr": "10.0.0.1", 00:12:56.448 "trsvcid": "50636" 00:12:56.448 }, 00:12:56.448 "auth": { 00:12:56.448 "state": "completed", 00:12:56.448 "digest": "sha512", 00:12:56.448 "dhgroup": "ffdhe8192" 00:12:56.448 } 00:12:56.448 } 00:12:56.448 ]' 00:12:56.448 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:56.707 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:56.707 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:56.707 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:56.707 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:56.707 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.707 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.707 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.966 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:00:YmRlN2E2MDliZDQzMDExYjhlNThiM2Y4YWMxODgzYTBlZTZlZDdhOGU4OWE0OTg3IOMeBA==: --dhchap-ctrl-secret DHHC-1:03:NWMzNjFlODViNmI5YWNjZDlkY2U1ZDRkYzU1YTk2MDlkMDQ3MDE1OGZhMDQ0YjcyODVlNTUwMjFlYzhlMmU1ZcIFnmw=: 00:12:57.903 13:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.903 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:12:57.903 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.903 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.903 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.903 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:57.903 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:57.903 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:57.903 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:12:57.903 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:57.903 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:57.903 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:57.903 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:57.903 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.903 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.903 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.903 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.903 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.903 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.903 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.837 00:12:58.837 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:58.837 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
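The entries above replay the same connect_authenticate round for key1, now with the ffdhe8192 group. Stripped of the xtrace prefixes, the host/target RPC sequence the suite exercises per key id looks roughly like the sketch below (the rpc.py path, sockets, NQNs and host UUID are the values from this run; the nvmf_subsystem_* calls actually go through the harness's rpc_cmd wrapper to the target's RPC socket, shown here against the default socket for brevity):

    # Condensed sketch of one connect_authenticate round from target/auth.sh.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Limit the host-side initiator to one digest/dhgroup combination.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # Allow the host on the target side with key1/ckey1, then attach from the host.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Confirm the controller came up and the qpair negotiated the expected parameters.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" \
        | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

    # Tear down before the next key id.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
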
00:12:58.837 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.095 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.095 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.096 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.096 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.096 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.096 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:59.096 { 00:12:59.096 "cntlid": 139, 00:12:59.096 "qid": 0, 00:12:59.096 "state": "enabled", 00:12:59.096 "thread": "nvmf_tgt_poll_group_000", 00:12:59.096 "listen_address": { 00:12:59.096 "trtype": "TCP", 00:12:59.096 "adrfam": "IPv4", 00:12:59.096 "traddr": "10.0.0.2", 00:12:59.096 "trsvcid": "4420" 00:12:59.096 }, 00:12:59.096 "peer_address": { 00:12:59.096 "trtype": "TCP", 00:12:59.096 "adrfam": "IPv4", 00:12:59.096 "traddr": "10.0.0.1", 00:12:59.096 "trsvcid": "50672" 00:12:59.096 }, 00:12:59.096 "auth": { 00:12:59.096 "state": "completed", 00:12:59.096 "digest": "sha512", 00:12:59.096 "dhgroup": "ffdhe8192" 00:12:59.096 } 00:12:59.096 } 00:12:59.096 ]' 00:12:59.096 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:59.096 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:59.096 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:59.096 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:59.096 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:59.096 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.096 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.096 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.354 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:01:MzBjZTY1MGQ0MTJiYzA4ZWI2MzUyYzEyNzM3NDFhN2MOF0u8: --dhchap-ctrl-secret DHHC-1:02:MDViZWYyMmY4YzRmN2NlZmUyOWU0ZGQyMTU0NWNhMmRlNGUwYTRiY2U1NjYxZTNk/peKUw==: 00:13:00.289 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.289 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:13:00.289 13:55:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.289 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.289 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.289 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:00.289 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:00.289 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:00.548 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:13:00.548 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:00.548 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:00.548 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:00.548 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:00.548 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.548 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.548 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.548 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.548 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.548 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.548 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.115 00:13:01.115 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:01.115 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:01.116 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.375 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.375 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:13:01.375 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.375 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.375 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.375 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:01.375 { 00:13:01.375 "cntlid": 141, 00:13:01.375 "qid": 0, 00:13:01.375 "state": "enabled", 00:13:01.375 "thread": "nvmf_tgt_poll_group_000", 00:13:01.375 "listen_address": { 00:13:01.375 "trtype": "TCP", 00:13:01.375 "adrfam": "IPv4", 00:13:01.375 "traddr": "10.0.0.2", 00:13:01.375 "trsvcid": "4420" 00:13:01.375 }, 00:13:01.375 "peer_address": { 00:13:01.375 "trtype": "TCP", 00:13:01.375 "adrfam": "IPv4", 00:13:01.375 "traddr": "10.0.0.1", 00:13:01.375 "trsvcid": "50714" 00:13:01.375 }, 00:13:01.375 "auth": { 00:13:01.375 "state": "completed", 00:13:01.375 "digest": "sha512", 00:13:01.375 "dhgroup": "ffdhe8192" 00:13:01.375 } 00:13:01.375 } 00:13:01.375 ]' 00:13:01.375 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:01.375 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:01.633 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:01.633 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:01.633 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:01.633 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.633 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.633 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.891 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:02:MWY2ZjZjNWVmMjc0ZTY3YThiMGU2NWEwOWUwZDhhNTMwZDgyMzY4MTIxZjU3NGQwjIQi8A==: --dhchap-ctrl-secret DHHC-1:01:ZjFiOTliNGNiMGJkMmUyMDhhYmFiNDdjZTVkZTU0YTIx/32G: 00:13:02.826 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.826 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:13:02.826 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.826 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.826 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.826 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:13:02.826 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:02.826 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:02.826 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:13:02.826 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:02.826 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:02.826 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:02.826 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:02.826 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.826 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key3 00:13:02.826 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.826 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.826 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.826 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:02.826 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:03.765 00:13:03.765 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:03.765 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:03.765 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.765 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.765 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.765 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.765 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.765 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.765 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:03.765 { 00:13:03.765 "cntlid": 
143, 00:13:03.765 "qid": 0, 00:13:03.765 "state": "enabled", 00:13:03.765 "thread": "nvmf_tgt_poll_group_000", 00:13:03.765 "listen_address": { 00:13:03.765 "trtype": "TCP", 00:13:03.765 "adrfam": "IPv4", 00:13:03.765 "traddr": "10.0.0.2", 00:13:03.765 "trsvcid": "4420" 00:13:03.765 }, 00:13:03.765 "peer_address": { 00:13:03.765 "trtype": "TCP", 00:13:03.765 "adrfam": "IPv4", 00:13:03.765 "traddr": "10.0.0.1", 00:13:03.765 "trsvcid": "50324" 00:13:03.765 }, 00:13:03.765 "auth": { 00:13:03.765 "state": "completed", 00:13:03.765 "digest": "sha512", 00:13:03.765 "dhgroup": "ffdhe8192" 00:13:03.765 } 00:13:03.765 } 00:13:03.765 ]' 00:13:03.765 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:04.026 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:04.026 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:04.026 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:04.026 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:04.026 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.026 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.026 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.284 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:03:YTczZTY0NjI1NzdmYjQ3YTc2MzZjZWNmMDk3MTIxZTNlM2UwMTllZmE0ZTMxZmY2ZmJkNzczZmVhMzM3ODU2ZAuDCpA=: 00:13:04.850 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.850 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:13:04.850 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.850 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.850 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.850 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:04.850 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:13:04.850 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:04.850 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:04.850 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:04.850 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:05.109 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:13:05.109 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:05.109 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:05.109 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:05.109 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:05.109 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.109 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.109 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.109 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.109 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.109 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.109 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.043 00:13:06.043 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:06.043 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:06.043 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.043 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.043 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.043 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.043 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.043 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.043 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:06.043 { 00:13:06.043 
"cntlid": 145, 00:13:06.043 "qid": 0, 00:13:06.043 "state": "enabled", 00:13:06.043 "thread": "nvmf_tgt_poll_group_000", 00:13:06.043 "listen_address": { 00:13:06.043 "trtype": "TCP", 00:13:06.043 "adrfam": "IPv4", 00:13:06.043 "traddr": "10.0.0.2", 00:13:06.043 "trsvcid": "4420" 00:13:06.043 }, 00:13:06.043 "peer_address": { 00:13:06.043 "trtype": "TCP", 00:13:06.043 "adrfam": "IPv4", 00:13:06.043 "traddr": "10.0.0.1", 00:13:06.043 "trsvcid": "50350" 00:13:06.043 }, 00:13:06.043 "auth": { 00:13:06.043 "state": "completed", 00:13:06.043 "digest": "sha512", 00:13:06.043 "dhgroup": "ffdhe8192" 00:13:06.043 } 00:13:06.043 } 00:13:06.043 ]' 00:13:06.043 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:06.302 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:06.302 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:06.302 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:06.302 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:06.302 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.302 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.302 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.560 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:00:YmRlN2E2MDliZDQzMDExYjhlNThiM2Y4YWMxODgzYTBlZTZlZDdhOGU4OWE0OTg3IOMeBA==: --dhchap-ctrl-secret DHHC-1:03:NWMzNjFlODViNmI5YWNjZDlkY2U1ZDRkYzU1YTk2MDlkMDQ3MDE1OGZhMDQ0YjcyODVlNTUwMjFlYzhlMmU1ZcIFnmw=: 00:13:07.558 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.558 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:13:07.558 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.558 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.558 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.558 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key1 00:13:07.558 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.558 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.558 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:13:07.558 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:07.558 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:07.558 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:07.558 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:07.558 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:07.558 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:07.558 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:07.558 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:07.558 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:08.125 request: 00:13:08.125 { 00:13:08.125 "name": "nvme0", 00:13:08.125 "trtype": "tcp", 00:13:08.125 "traddr": "10.0.0.2", 00:13:08.125 "adrfam": "ipv4", 00:13:08.125 "trsvcid": "4420", 00:13:08.125 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:08.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89", 00:13:08.125 "prchk_reftag": false, 00:13:08.125 "prchk_guard": false, 00:13:08.125 "hdgst": false, 00:13:08.125 "ddgst": false, 00:13:08.125 "dhchap_key": "key2", 00:13:08.125 "method": "bdev_nvme_attach_controller", 00:13:08.125 "req_id": 1 00:13:08.125 } 00:13:08.125 Got JSON-RPC error response 00:13:08.125 response: 00:13:08.125 { 00:13:08.125 "code": -5, 00:13:08.125 "message": "Input/output error" 00:13:08.125 } 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:08.125 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:08.692 request: 00:13:08.692 { 00:13:08.692 "name": "nvme0", 00:13:08.692 "trtype": "tcp", 00:13:08.692 "traddr": "10.0.0.2", 00:13:08.692 "adrfam": "ipv4", 00:13:08.692 "trsvcid": "4420", 00:13:08.692 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:08.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89", 00:13:08.692 "prchk_reftag": false, 00:13:08.692 "prchk_guard": false, 00:13:08.692 "hdgst": false, 00:13:08.692 "ddgst": false, 00:13:08.692 "dhchap_key": "key1", 00:13:08.692 "dhchap_ctrlr_key": "ckey2", 00:13:08.692 "method": "bdev_nvme_attach_controller", 00:13:08.692 "req_id": 1 00:13:08.692 } 00:13:08.692 Got JSON-RPC error response 00:13:08.692 response: 00:13:08.692 { 00:13:08.692 "code": -5, 00:13:08.692 "message": "Input/output error" 
00:13:08.692 } 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key1 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.692 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.259 request: 00:13:09.259 { 00:13:09.259 "name": "nvme0", 00:13:09.259 "trtype": "tcp", 00:13:09.259 "traddr": "10.0.0.2", 00:13:09.259 "adrfam": "ipv4", 00:13:09.259 "trsvcid": "4420", 00:13:09.259 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:09.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89", 00:13:09.259 "prchk_reftag": false, 00:13:09.259 "prchk_guard": false, 00:13:09.259 "hdgst": false, 00:13:09.259 "ddgst": false, 00:13:09.259 "dhchap_key": "key1", 00:13:09.259 "dhchap_ctrlr_key": "ckey1", 00:13:09.259 "method": "bdev_nvme_attach_controller", 00:13:09.259 "req_id": 1 00:13:09.259 } 00:13:09.259 Got JSON-RPC error response 00:13:09.259 response: 00:13:09.259 { 00:13:09.259 "code": -5, 00:13:09.259 "message": "Input/output error" 00:13:09.259 } 00:13:09.259 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:09.259 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:09.259 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:09.259 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:09.259 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:13:09.259 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.259 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.259 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.259 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 68456 00:13:09.259 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 68456 ']' 00:13:09.259 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 68456 00:13:09.259 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:09.259 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:09.259 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68456 00:13:09.259 killing process with pid 68456 00:13:09.259 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:09.259 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:09.259 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68456' 00:13:09.259 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 68456 00:13:09.259 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 68456 00:13:09.517 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:09.517 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:13:09.517 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:09.517 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.517 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=71545 00:13:09.517 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 71545 00:13:09.517 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 71545 ']' 00:13:09.517 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.517 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:09.517 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:09.517 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.517 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:09.517 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.891 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:10.891 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:10.891 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:10.891 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:10.891 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.891 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.891 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:10.891 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 71545 00:13:10.891 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 71545 ']' 00:13:10.891 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.891 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:10.891 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
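For orientation, the restart above boils down to relaunching the target inside the test namespace with DHCHAP debug logging enabled and then waiting for its RPC socket. The polling loop below is only a rough stand-in for the waitforlisten helper, whose actual implementation is not shown in this log:

# relaunch the target in the namespace with the nvmf_auth log flag (command taken verbatim from the run)
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# crude approximation of waitforlisten: block until the default RPC socket exists
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done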
00:13:10.891 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:10.891 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.891 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:10.891 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:10.891 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:13:10.891 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.891 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.149 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.149 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:13:11.149 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:11.149 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:11.149 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:11.149 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:11.149 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.149 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key3 00:13:11.149 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.149 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.149 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.149 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:11.149 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:11.715 00:13:11.715 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:11.715 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:11.715 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.973 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.973 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
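Spelled out as standalone calls, the successful sha512/ffdhe8192 exchange above looks roughly like the sketch below. Here rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py as used throughout the log, and key3 is one of the DHCHAP keys registered earlier in the test (that setup is not shown in this excerpt):

# target side: allow the host NQN and bind it to key3
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key3

# host side (bdev_nvme initiator on /var/tmp/host.sock): attach with the same key
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3

# verify: the controller shows up on the host and the target reports an authenticated qpair
rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0                 # auth.state should be "completed"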
00:13:11.973 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.973 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.243 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.243 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:12.243 { 00:13:12.243 "cntlid": 1, 00:13:12.243 "qid": 0, 00:13:12.243 "state": "enabled", 00:13:12.243 "thread": "nvmf_tgt_poll_group_000", 00:13:12.243 "listen_address": { 00:13:12.243 "trtype": "TCP", 00:13:12.243 "adrfam": "IPv4", 00:13:12.243 "traddr": "10.0.0.2", 00:13:12.243 "trsvcid": "4420" 00:13:12.243 }, 00:13:12.243 "peer_address": { 00:13:12.243 "trtype": "TCP", 00:13:12.243 "adrfam": "IPv4", 00:13:12.243 "traddr": "10.0.0.1", 00:13:12.243 "trsvcid": "50398" 00:13:12.243 }, 00:13:12.243 "auth": { 00:13:12.243 "state": "completed", 00:13:12.243 "digest": "sha512", 00:13:12.243 "dhgroup": "ffdhe8192" 00:13:12.243 } 00:13:12.243 } 00:13:12.243 ]' 00:13:12.243 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:12.243 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:12.243 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:12.243 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:12.243 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:12.243 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.243 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.244 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.530 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid 71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-secret DHHC-1:03:YTczZTY0NjI1NzdmYjQ3YTc2MzZjZWNmMDk3MTIxZTNlM2UwMTllZmE0ZTMxZmY2ZmJkNzczZmVhMzM3ODU2ZAuDCpA=: 00:13:13.463 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.463 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:13:13.463 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.463 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.463 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.463 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --dhchap-key key3 00:13:13.463 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.463 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.463 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.463 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:13.463 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:13.463 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:13.463 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:13.463 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:13.463 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:13.463 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:13.463 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:13.463 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:13.463 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:13.463 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:14.029 request: 00:13:14.029 { 00:13:14.029 "name": "nvme0", 00:13:14.029 "trtype": "tcp", 00:13:14.029 "traddr": "10.0.0.2", 00:13:14.029 "adrfam": "ipv4", 00:13:14.029 "trsvcid": "4420", 00:13:14.029 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:14.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89", 00:13:14.029 "prchk_reftag": false, 00:13:14.029 "prchk_guard": false, 00:13:14.029 "hdgst": false, 00:13:14.029 "ddgst": false, 00:13:14.029 "dhchap_key": "key3", 00:13:14.029 "method": "bdev_nvme_attach_controller", 00:13:14.029 "req_id": 1 00:13:14.029 } 00:13:14.029 Got JSON-RPC error response 00:13:14.029 response: 00:13:14.029 { 00:13:14.029 "code": -5, 00:13:14.029 "message": "Input/output error" 00:13:14.029 } 00:13:14.029 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 
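The failure that follows is the intended outcome of this negative test: the host is deliberately restricted to sha256 digests so the DH-HMAC-CHAP negotiation for key3 cannot complete, and the attach is expected to come back with JSON-RPC error -5 (Input/output error). In shell form the pattern is roughly (rpc.py again standing for the full scripts/rpc.py path):

# host side: advertise only sha256 for DH-HMAC-CHAP
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256

# this attach is now expected to fail; treat success as a test error
if rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
    echo "unexpected success" >&2
    exit 1
fi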
00:13:14.029 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:14.029 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:14.029 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:14.029 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:13:14.029 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:13:14.029 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:14.029 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:14.287 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:14.287 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:14.287 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:14.287 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:14.287 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:14.288 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:14.288 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:14.288 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:14.288 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:14.545 request: 00:13:14.545 { 00:13:14.545 "name": "nvme0", 00:13:14.545 "trtype": "tcp", 00:13:14.545 "traddr": "10.0.0.2", 00:13:14.545 "adrfam": "ipv4", 00:13:14.545 "trsvcid": "4420", 00:13:14.545 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:14.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89", 00:13:14.545 "prchk_reftag": false, 00:13:14.545 "prchk_guard": false, 00:13:14.545 "hdgst": false, 00:13:14.545 "ddgst": false, 00:13:14.545 "dhchap_key": "key3", 00:13:14.545 "method": "bdev_nvme_attach_controller", 00:13:14.545 "req_id": 1 00:13:14.545 } 00:13:14.545 Got JSON-RPC error response 
00:13:14.545 response: 00:13:14.545 { 00:13:14.545 "code": -5, 00:13:14.545 "message": "Input/output error" 00:13:14.545 } 00:13:14.545 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:14.545 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:14.545 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:14.545 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:14.545 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:14.545 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:13:14.545 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:14.545 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:14.545 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:14.545 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:14.803 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:13:14.803 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.803 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.803 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.803 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:13:14.803 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.803 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.803 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.803 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:14.803 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:14.803 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:14.803 13:56:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:14.803 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:14.803 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:14.803 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:14.803 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:14.803 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:15.062 request: 00:13:15.062 { 00:13:15.062 "name": "nvme0", 00:13:15.062 "trtype": "tcp", 00:13:15.062 "traddr": "10.0.0.2", 00:13:15.062 "adrfam": "ipv4", 00:13:15.062 "trsvcid": "4420", 00:13:15.062 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:15.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89", 00:13:15.062 "prchk_reftag": false, 00:13:15.062 "prchk_guard": false, 00:13:15.062 "hdgst": false, 00:13:15.062 "ddgst": false, 00:13:15.062 "dhchap_key": "key0", 00:13:15.062 "dhchap_ctrlr_key": "key1", 00:13:15.062 "method": "bdev_nvme_attach_controller", 00:13:15.062 "req_id": 1 00:13:15.062 } 00:13:15.062 Got JSON-RPC error response 00:13:15.062 response: 00:13:15.062 { 00:13:15.062 "code": -5, 00:13:15.062 "message": "Input/output error" 00:13:15.062 } 00:13:15.062 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:15.062 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:15.062 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:15.062 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:15.062 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:15.062 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:15.320 00:13:15.320 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:13:15.320 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.320 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:13:15.578 13:56:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.578 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.578 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.836 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:13:15.836 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:13:15.836 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 68492 00:13:15.836 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 68492 ']' 00:13:15.836 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 68492 00:13:15.836 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:15.836 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:15.836 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68492 00:13:15.836 killing process with pid 68492 00:13:15.836 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:15.836 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:15.836 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68492' 00:13:15.836 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 68492 00:13:15.836 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 68492 00:13:16.402 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:16.403 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:16.403 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:13:16.403 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:16.403 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:13:16.403 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:16.403 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:16.403 rmmod nvme_tcp 00:13:16.403 rmmod nvme_fabrics 00:13:16.403 rmmod nvme_keyring 00:13:16.403 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:16.403 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:13:16.403 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:13:16.403 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 71545 ']' 00:13:16.403 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 71545 00:13:16.403 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 71545 ']' 00:13:16.403 
13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 71545 00:13:16.403 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:16.403 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:16.403 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71545 00:13:16.403 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:16.403 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:16.403 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71545' 00:13:16.403 killing process with pid 71545 00:13:16.403 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 71545 00:13:16.403 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 71545 00:13:16.660 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:16.660 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:16.660 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:16.660 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:16.660 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:16.660 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.660 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.660 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.TJ0 /tmp/spdk.key-sha256.cxP /tmp/spdk.key-sha384.1DL /tmp/spdk.key-sha512.JAE /tmp/spdk.key-sha512.Zhy /tmp/spdk.key-sha384.e71 /tmp/spdk.key-sha256.npU '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:16.918 00:13:16.918 real 2m55.040s 00:13:16.918 user 6m58.929s 00:13:16.918 sys 0m27.511s 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.918 ************************************ 00:13:16.918 END TEST nvmf_auth_target 00:13:16.918 ************************************ 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:16.918 ************************************ 00:13:16.918 START TEST nvmf_bdevio_no_huge 00:13:16.918 ************************************ 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:16.918 * Looking for test storage... 00:13:16.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.918 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:16.919 Cannot find device "nvmf_tgt_br" 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:16.919 Cannot find device "nvmf_tgt_br2" 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:16.919 Cannot find device "nvmf_tgt_br" 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:16.919 Cannot find device "nvmf_tgt_br2" 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:13:16.919 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:17.177 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:17.177 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:17.177 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:17.177 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:17.435 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:17.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:17.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:13:17.435 00:13:17.435 --- 10.0.0.2 ping statistics --- 00:13:17.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.435 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:13:17.435 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:17.435 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:17.435 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:13:17.435 00:13:17.435 --- 10.0.0.3 ping statistics --- 00:13:17.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.435 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:13:17.435 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:17.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:17.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:13:17.435 00:13:17.435 --- 10.0.0.1 ping statistics --- 00:13:17.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.435 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:13:17.435 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.435 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:13:17.435 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:17.435 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.435 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:17.435 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:17.435 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.435 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:17.435 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:17.435 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:17.435 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:17.435 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:17.435 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:17.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.435 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=71879 00:13:17.435 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:17.435 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 71879 00:13:17.436 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 71879 ']' 00:13:17.436 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.436 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:17.436 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.436 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:17.436 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:17.436 [2024-07-25 13:56:06.300374] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
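For reference, the veth/namespace plumbing that nvmf_veth_init just rebuilt condenses to the commands below, all taken from the run above: 10.0.0.1 stays on the initiator side, while 10.0.0.2 and 10.0.0.3 live inside the target namespace and port 4420 is opened for NVMe/TCP.

ip netns add nvmf_tgt_ns_spdk

# veth pairs: one leg per pair stays in the root namespace, the target legs move into the netns
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the root-namespace legs together and allow NVMe/TCP traffic on port 4420
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity checks, matching the ping output above
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1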
00:13:17.436 [2024-07-25 13:56:06.300718] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:17.436 [2024-07-25 13:56:06.443560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:17.694 [2024-07-25 13:56:06.612632] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.694 [2024-07-25 13:56:06.612695] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.694 [2024-07-25 13:56:06.612710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.694 [2024-07-25 13:56:06.612720] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.694 [2024-07-25 13:56:06.612730] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.694 [2024-07-25 13:56:06.612901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:17.694 [2024-07-25 13:56:06.613642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:17.694 [2024-07-25 13:56:06.615339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:17.694 [2024-07-25 13:56:06.615404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.694 [2024-07-25 13:56:06.620797] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:18.259 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:18.259 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:13:18.259 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:18.259 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:18.259 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:18.518 [2024-07-25 13:56:07.315168] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:18.518 Malloc0 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.518 13:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:18.518 [2024-07-25 13:56:07.363356] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:18.518 { 00:13:18.518 "params": { 00:13:18.518 "name": "Nvme$subsystem", 00:13:18.518 "trtype": "$TEST_TRANSPORT", 00:13:18.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:18.518 "adrfam": "ipv4", 00:13:18.518 "trsvcid": "$NVMF_PORT", 00:13:18.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:18.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:18.518 "hdgst": ${hdgst:-false}, 00:13:18.518 "ddgst": ${ddgst:-false} 00:13:18.518 }, 00:13:18.518 "method": "bdev_nvme_attach_controller" 00:13:18.518 } 00:13:18.518 EOF 00:13:18.518 )") 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
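The trace above stands up the no-hugepage target that bdevio then exercises: a 64 MiB Malloc0 bdev exported as namespace 1 of nqn.2016-06.io.spdk:cnode1 behind a TCP listener, with both nvmf_tgt and bdevio running with --no-huge -s 1024. A minimal stand-alone sketch of that provisioning, assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock as it does in these tests; the bdev_nvme_attach_controller JSON that gen_nvmf_target_json pipes into bdevio is expanded in the entries that follow:

    # equivalent of the rpc_cmd calls traced in target/bdevio.sh@18..@22 above
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
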
00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:13:18.518 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:18.518 "params": { 00:13:18.518 "name": "Nvme1", 00:13:18.518 "trtype": "tcp", 00:13:18.518 "traddr": "10.0.0.2", 00:13:18.518 "adrfam": "ipv4", 00:13:18.518 "trsvcid": "4420", 00:13:18.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:18.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:18.518 "hdgst": false, 00:13:18.518 "ddgst": false 00:13:18.518 }, 00:13:18.518 "method": "bdev_nvme_attach_controller" 00:13:18.518 }' 00:13:18.518 [2024-07-25 13:56:07.427473] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:13:18.518 [2024-07-25 13:56:07.427894] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71915 ] 00:13:18.776 [2024-07-25 13:56:07.583163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:18.776 [2024-07-25 13:56:07.731330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.776 [2024-07-25 13:56:07.731429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.776 [2024-07-25 13:56:07.731445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.776 [2024-07-25 13:56:07.746892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:19.035 I/O targets: 00:13:19.035 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:19.035 00:13:19.035 00:13:19.035 CUnit - A unit testing framework for C - Version 2.1-3 00:13:19.035 http://cunit.sourceforge.net/ 00:13:19.035 00:13:19.035 00:13:19.035 Suite: bdevio tests on: Nvme1n1 00:13:19.035 Test: blockdev write read block ...passed 00:13:19.035 Test: blockdev write zeroes read block ...passed 00:13:19.035 Test: blockdev write zeroes read no split ...passed 00:13:19.035 Test: blockdev write zeroes read split ...passed 00:13:19.035 Test: blockdev write zeroes read split partial ...passed 00:13:19.035 Test: blockdev reset ...[2024-07-25 13:56:07.969425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:19.035 [2024-07-25 13:56:07.969555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118a870 (9): Bad file descriptor 00:13:19.035 passed 00:13:19.035 Test: blockdev write read 8 blocks ...[2024-07-25 13:56:07.989178] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:19.035 passed 00:13:19.035 Test: blockdev write read size > 128k ...passed 00:13:19.035 Test: blockdev write read invalid size ...passed 00:13:19.035 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:19.035 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:19.035 Test: blockdev write read max offset ...passed 00:13:19.035 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:19.035 Test: blockdev writev readv 8 blocks ...passed 00:13:19.035 Test: blockdev writev readv 30 x 1block ...passed 00:13:19.035 Test: blockdev writev readv block ...passed 00:13:19.035 Test: blockdev writev readv size > 128k ...passed 00:13:19.035 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:19.035 Test: blockdev comparev and writev ...[2024-07-25 13:56:07.998810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.035 [2024-07-25 13:56:07.998870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:19.036 [2024-07-25 13:56:07.998891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.036 [2024-07-25 13:56:07.998903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:19.036 [2024-07-25 13:56:07.999430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.036 [2024-07-25 13:56:07.999453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:19.036 [2024-07-25 13:56:07.999736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.036 [2024-07-25 13:56:07.999750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:19.036 [2024-07-25 13:56:08.000050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.036 [2024-07-25 13:56:08.000074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:19.036 [2024-07-25 13:56:08.000104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.036 [2024-07-25 13:56:08.000116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:19.036 [2024-07-25 13:56:08.000479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.036 [2024-07-25 13:56:08.000502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:19.036 [2024-07-25 13:56:08.000520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:19.036 [2024-07-25 13:56:08.000530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:13:19.036 passed 00:13:19.036 Test: blockdev nvme passthru rw ...passed 00:13:19.036 Test: blockdev nvme passthru vendor specific ...passed 00:13:19.036 Test: blockdev nvme admin passthru ...[2024-07-25 13:56:08.001641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:19.036 [2024-07-25 13:56:08.001731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:19.036 [2024-07-25 13:56:08.001850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:19.036 [2024-07-25 13:56:08.001873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:19.036 [2024-07-25 13:56:08.001982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:19.036 [2024-07-25 13:56:08.002003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:19.036 [2024-07-25 13:56:08.002111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:19.036 [2024-07-25 13:56:08.002132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:19.036 passed 00:13:19.036 Test: blockdev copy ...passed 00:13:19.036 00:13:19.036 Run Summary: Type Total Ran Passed Failed Inactive 00:13:19.036 suites 1 1 n/a 0 0 00:13:19.036 tests 23 23 23 0 0 00:13:19.036 asserts 152 152 152 0 n/a 00:13:19.036 00:13:19.036 Elapsed time = 0.197 seconds 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:19.601 rmmod nvme_tcp 00:13:19.601 rmmod nvme_fabrics 00:13:19.601 rmmod nvme_keyring 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:13:19.601 13:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 71879 ']' 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 71879 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 71879 ']' 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 71879 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71879 00:13:19.601 killing process with pid 71879 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71879' 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 71879 00:13:19.601 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 71879 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:20.166 00:13:20.166 real 0m3.293s 00:13:20.166 user 0m10.817s 00:13:20.166 sys 0m1.281s 00:13:20.166 ************************************ 00:13:20.166 END TEST nvmf_bdevio_no_huge 00:13:20.166 ************************************ 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:20.166 ************************************ 00:13:20.166 START TEST nvmf_tls 00:13:20.166 ************************************ 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:20.166 * Looking for test storage... 00:13:20.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.166 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
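nvmftestinit comes next: with NET_TYPE=virt it clears any stale nvmf_tgt_ns_spdk namespace and nvmf_veth_init rebuilds the 10.0.0.0/24 test topology, as the entries below trace (the "Cannot find device" and "Cannot open network namespace" messages are the cleanup probing links and a namespace that do not exist yet). A condensed sketch of the construction that follows, with names and addresses taken from the trace; the interface bring-up, bridge enslaving, TCP/4420 iptables rule, and reachability pings round it out below:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # all veth ends, the namespace loopback, and nvmf_br are also set up/brought up
    # before the pings to 10.0.0.2, 10.0.0.3, and (inside the netns) 10.0.0.1
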
00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.432 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:20.433 Cannot find device 
"nvmf_tgt_br" 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # true 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:20.433 Cannot find device "nvmf_tgt_br2" 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # true 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:20.433 Cannot find device "nvmf_tgt_br" 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # true 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:20.433 Cannot find device "nvmf_tgt_br2" 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # true 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:20.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:20.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:20.433 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:20.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:13:20.692 00:13:20.692 --- 10.0.0.2 ping statistics --- 00:13:20.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.692 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:20.692 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:20.692 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:13:20.692 00:13:20.692 --- 10.0.0.3 ping statistics --- 00:13:20.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.692 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:20.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:20.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:13:20.692 00:13:20.692 --- 10.0.0.1 ping statistics --- 00:13:20.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.692 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:20.692 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:20.693 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.693 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:20.693 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:20.693 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:20.693 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:20.693 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:20.693 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:20.693 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72099 00:13:20.693 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:20.693 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72099 00:13:20.693 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72099 ']' 00:13:20.693 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.693 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:20.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.693 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.693 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:20.693 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:20.693 [2024-07-25 13:56:09.638555] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:13:20.693 [2024-07-25 13:56:09.638656] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.951 [2024-07-25 13:56:09.777474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.951 [2024-07-25 13:56:09.907468] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.951 [2024-07-25 13:56:09.907524] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.951 [2024-07-25 13:56:09.907536] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.951 [2024-07-25 13:56:09.907545] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.951 [2024-07-25 13:56:09.907552] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.951 [2024-07-25 13:56:09.907580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.886 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:21.886 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:21.886 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:21.886 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:21.886 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:21.886 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.886 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:21.886 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:21.886 true 00:13:21.886 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:21.886 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:13:22.451 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:13:22.451 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:22.451 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:22.451 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:22.451 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:13:22.709 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:13:22.709 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:22.709 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:22.967 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:13:22.967 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:13:23.534 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:13:23.534 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:23.534 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:23.534 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:23.534 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:13:23.534 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:23.534 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:23.792 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:13:23.792 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:24.051 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:13:24.051 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:24.051 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:24.309 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:24.309 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:24.567 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:13:24.567 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:24.567 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:24.567 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:24.567 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:24.567 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:24.567 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:13:24.567 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:24.567 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:24.824 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:24.824 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:24.824 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:24.824 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:24.824 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:24.824 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:13:24.824 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:24.824 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:24.824 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:24.824 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:13:24.824 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.pkkWiAzws6 00:13:24.824 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:24.824 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.OWcxyqJAMk 00:13:24.824 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:24.824 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:24.824 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.pkkWiAzws6 00:13:24.824 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.OWcxyqJAMk 00:13:24.824 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:25.083 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:25.341 [2024-07-25 13:56:14.305461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:25.341 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.pkkWiAzws6 00:13:25.341 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.pkkWiAzws6 00:13:25.341 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:25.599 [2024-07-25 13:56:14.604546] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.599 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:25.857 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:26.115 [2024-07-25 13:56:15.084656] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:26.115 [2024-07-25 13:56:15.084902] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.115 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:26.373 malloc0 00:13:26.373 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:26.939 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pkkWiAzws6 00:13:26.939 [2024-07-25 13:56:15.900765] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:26.939 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.pkkWiAzws6 00:13:39.149 Initializing NVMe Controllers 00:13:39.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:39.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:39.149 Initialization complete. Launching workers. 00:13:39.149 ======================================================== 00:13:39.149 Latency(us) 00:13:39.149 Device Information : IOPS MiB/s Average min max 00:13:39.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9551.29 37.31 6702.39 1109.79 14768.66 00:13:39.149 ======================================================== 00:13:39.149 Total : 9551.29 37.31 6702.39 1109.79 14768.66 00:13:39.149 00:13:39.149 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pkkWiAzws6 00:13:39.149 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:39.149 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:39.149 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:39.149 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.pkkWiAzws6' 00:13:39.149 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:39.149 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72335 00:13:39.149 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:39.149 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:39.149 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72335 /var/tmp/bdevperf.sock 00:13:39.149 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72335 ']' 00:13:39.149 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:39.149 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:39.149 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:39.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
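Before that spdk_nvme_perf run, tls.sh (talking to the -m 0x2 --wait-for-rpc target) probes the ssl socket implementation options, leaves kTLS disabled and pins --tls-version 13, writes the two NVMeTLSkey-1:01: interchange keys to mktemp files with mode 0600, and provisions a TLS-enabled subsystem that only admits host1 with the first key. A condensed sketch of that sequence, using the key paths from this run:

    scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.pkkWiAzws6
    # the perf initiator, like the target, runs inside the nvmf_tgt_ns_spdk namespace
    ip netns exec nvmf_tgt_ns_spdk build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
        --psk-path /tmp/tmp.pkkWiAzws6
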
00:13:39.149 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:39.149 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.149 [2024-07-25 13:56:26.212511] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:13:39.149 [2024-07-25 13:56:26.212857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72335 ] 00:13:39.149 [2024-07-25 13:56:26.348740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.149 [2024-07-25 13:56:26.475582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.149 [2024-07-25 13:56:26.529124] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:39.149 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.149 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:39.149 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pkkWiAzws6 00:13:39.149 [2024-07-25 13:56:27.477238] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:39.149 [2024-07-25 13:56:27.477378] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:39.149 TLSTESTn1 00:13:39.149 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:39.149 Running I/O for 10 seconds... 
00:13:49.153 00:13:49.153 Latency(us) 00:13:49.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.153 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:49.153 Verification LBA range: start 0x0 length 0x2000 00:13:49.153 TLSTESTn1 : 10.02 3981.12 15.55 0.00 0.00 32088.52 6404.65 32410.53 00:13:49.153 =================================================================================================================== 00:13:49.153 Total : 3981.12 15.55 0.00 0.00 32088.52 6404.65 32410.53 00:13:49.153 0 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 72335 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72335 ']' 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72335 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72335 00:13:49.153 killing process with pid 72335 00:13:49.153 Received shutdown signal, test time was about 10.000000 seconds 00:13:49.153 00:13:49.153 Latency(us) 00:13:49.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.153 =================================================================================================================== 00:13:49.153 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72335' 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72335 00:13:49.153 [2024-07-25 13:56:37.738111] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72335 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OWcxyqJAMk 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OWcxyqJAMk 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:49.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
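The bdevperf-based check just above attaches a TLSTEST controller with the matching key and drives verify I/O; the run that starts here repeats it with the second key under the NOT wrapper, so the attach is expected to fail. A sketch of the driver side, assuming bdevperf is already listening on /var/tmp/bdevperf.sock (launched with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10, as traced):

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pkkWiAzws6
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
    # substituting --psk /tmp/tmp.OWcxyqJAMk, a key the target never associated with
    # host1, makes the attach fail, as the error entries below show
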
00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OWcxyqJAMk 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.OWcxyqJAMk' 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72469 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72469 /var/tmp/bdevperf.sock 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72469 ']' 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:49.153 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:49.153 [2024-07-25 13:56:38.029677] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:13:49.153 [2024-07-25 13:56:38.030031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72469 ] 00:13:49.153 [2024-07-25 13:56:38.172763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.411 [2024-07-25 13:56:38.300978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.411 [2024-07-25 13:56:38.357339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:50.346 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:50.346 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:50.346 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OWcxyqJAMk 00:13:50.346 [2024-07-25 13:56:39.250802] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:50.346 [2024-07-25 13:56:39.251519] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:50.346 [2024-07-25 13:56:39.257677] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:50.346 [2024-07-25 13:56:39.258552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16401f0 (107): Transport endpoint is not connected 00:13:50.346 [2024-07-25 13:56:39.259544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16401f0 (9): Bad file descriptor 00:13:50.346 [2024-07-25 13:56:39.260540] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:50.346 [2024-07-25 13:56:39.260572] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:50.346 [2024-07-25 13:56:39.260590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
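The failure above is the expected outcome for this negative case, and it follows the shape every NOT run_bdevperf block in this section uses: start a standalone bdevperf application with -z (wait for RPCs), attempt a TLS controller attach over its RPC socket, and require that attach to fail. Condensed from the trace above, with paths and parameters exactly as logged:

  $ /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  $ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OWcxyqJAMk
  # Observed here: the qpair is torn down during TLS setup (errno 107, Transport endpoint is not
  # connected) and the attach returns -5 (Input/output error), which is the JSON-RPC response dumped next.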
00:13:50.346 request: 00:13:50.346 { 00:13:50.346 "name": "TLSTEST", 00:13:50.346 "trtype": "tcp", 00:13:50.346 "traddr": "10.0.0.2", 00:13:50.346 "adrfam": "ipv4", 00:13:50.346 "trsvcid": "4420", 00:13:50.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:50.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:50.346 "prchk_reftag": false, 00:13:50.346 "prchk_guard": false, 00:13:50.346 "hdgst": false, 00:13:50.346 "ddgst": false, 00:13:50.346 "psk": "/tmp/tmp.OWcxyqJAMk", 00:13:50.346 "method": "bdev_nvme_attach_controller", 00:13:50.346 "req_id": 1 00:13:50.346 } 00:13:50.346 Got JSON-RPC error response 00:13:50.346 response: 00:13:50.346 { 00:13:50.346 "code": -5, 00:13:50.346 "message": "Input/output error" 00:13:50.346 } 00:13:50.346 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72469 00:13:50.346 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72469 ']' 00:13:50.346 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72469 00:13:50.346 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:50.346 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:50.346 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72469 00:13:50.346 killing process with pid 72469 00:13:50.346 Received shutdown signal, test time was about 10.000000 seconds 00:13:50.346 00:13:50.346 Latency(us) 00:13:50.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.346 =================================================================================================================== 00:13:50.346 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:50.346 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:50.346 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:50.346 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72469' 00:13:50.346 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72469 00:13:50.347 [2024-07-25 13:56:39.313743] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:50.347 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72469 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pkkWiAzws6 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pkkWiAzws6 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:50.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pkkWiAzws6 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.pkkWiAzws6' 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72501 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72501 /var/tmp/bdevperf.sock 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72501 ']' 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:50.605 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.605 [2024-07-25 13:56:39.599841] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:13:50.605 [2024-07-25 13:56:39.600211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72501 ] 00:13:50.863 [2024-07-25 13:56:39.744312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.863 [2024-07-25 13:56:39.861919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.122 [2024-07-25 13:56:39.914803] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:51.688 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:51.688 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:51.688 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.pkkWiAzws6 00:13:52.255 [2024-07-25 13:56:41.034218] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:52.255 [2024-07-25 13:56:41.035105] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:52.255 [2024-07-25 13:56:41.040367] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:52.255 [2024-07-25 13:56:41.040655] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:52.255 [2024-07-25 13:56:41.040717] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:52.255 [2024-07-25 13:56:41.041042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b01f0 (107): Transport endpoint is not connected 00:13:52.255 [2024-07-25 13:56:41.042045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b01f0 (9): Bad file descriptor 00:13:52.255 [2024-07-25 13:56:41.043040] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:52.255 [2024-07-25 13:56:41.043071] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:52.255 [2024-07-25 13:56:41.043089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
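The target-side errors above show what the PSK lookup is keyed on: the identity string combines the host and subsystem NQNs ("NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" in this attempt), so a key registered only for host1 is simply not found when the same subsystem is approached as host2; the next case repeats this with the subsystem NQN changed to cnode2 instead. The attach that triggers it, as logged:

  $ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.pkkWiAzws6
  # posix_sock_psk_find_session_server_cb cannot resolve a PSK for this host/subsystem identity,
  # the handshake is rejected, and the attach again ends in the -5 response dumped next.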
00:13:52.255 request: 00:13:52.255 { 00:13:52.255 "name": "TLSTEST", 00:13:52.255 "trtype": "tcp", 00:13:52.255 "traddr": "10.0.0.2", 00:13:52.255 "adrfam": "ipv4", 00:13:52.255 "trsvcid": "4420", 00:13:52.255 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.255 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:52.255 "prchk_reftag": false, 00:13:52.255 "prchk_guard": false, 00:13:52.255 "hdgst": false, 00:13:52.255 "ddgst": false, 00:13:52.255 "psk": "/tmp/tmp.pkkWiAzws6", 00:13:52.255 "method": "bdev_nvme_attach_controller", 00:13:52.255 "req_id": 1 00:13:52.255 } 00:13:52.255 Got JSON-RPC error response 00:13:52.255 response: 00:13:52.255 { 00:13:52.255 "code": -5, 00:13:52.255 "message": "Input/output error" 00:13:52.255 } 00:13:52.255 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72501 00:13:52.255 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72501 ']' 00:13:52.255 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72501 00:13:52.255 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:52.255 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:52.255 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72501 00:13:52.255 killing process with pid 72501 00:13:52.255 Received shutdown signal, test time was about 10.000000 seconds 00:13:52.255 00:13:52.255 Latency(us) 00:13:52.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.255 =================================================================================================================== 00:13:52.255 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:52.255 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:52.255 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:52.255 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72501' 00:13:52.255 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72501 00:13:52.255 [2024-07-25 13:56:41.094353] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:52.255 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72501 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pkkWiAzws6 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pkkWiAzws6 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pkkWiAzws6 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.pkkWiAzws6' 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72524 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72524 /var/tmp/bdevperf.sock 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72524 ']' 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:52.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:52.514 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:52.514 [2024-07-25 13:56:41.396735] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:13:52.514 [2024-07-25 13:56:41.397229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72524 ] 00:13:52.772 [2024-07-25 13:56:41.546920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.772 [2024-07-25 13:56:41.666933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.772 [2024-07-25 13:56:41.719915] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:53.707 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:53.707 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:53.707 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pkkWiAzws6 00:13:53.707 [2024-07-25 13:56:42.627265] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:53.707 [2024-07-25 13:56:42.627416] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:53.707 [2024-07-25 13:56:42.638159] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:53.707 [2024-07-25 13:56:42.638217] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:53.707 [2024-07-25 13:56:42.638296] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:53.707 [2024-07-25 13:56:42.639173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14121f0 (107): Transport endpoint is not connected 00:13:53.707 [2024-07-25 13:56:42.640156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14121f0 (9): Bad file descriptor 00:13:53.707 [2024-07-25 13:56:42.641151] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:53.707 [2024-07-25 13:56:42.641182] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:53.707 [2024-07-25 13:56:42.641199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:13:53.707 request: 00:13:53.707 { 00:13:53.707 "name": "TLSTEST", 00:13:53.707 "trtype": "tcp", 00:13:53.707 "traddr": "10.0.0.2", 00:13:53.707 "adrfam": "ipv4", 00:13:53.707 "trsvcid": "4420", 00:13:53.707 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:53.707 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:53.707 "prchk_reftag": false, 00:13:53.707 "prchk_guard": false, 00:13:53.707 "hdgst": false, 00:13:53.707 "ddgst": false, 00:13:53.707 "psk": "/tmp/tmp.pkkWiAzws6", 00:13:53.707 "method": "bdev_nvme_attach_controller", 00:13:53.708 "req_id": 1 00:13:53.708 } 00:13:53.708 Got JSON-RPC error response 00:13:53.708 response: 00:13:53.708 { 00:13:53.708 "code": -5, 00:13:53.708 "message": "Input/output error" 00:13:53.708 } 00:13:53.708 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72524 00:13:53.708 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72524 ']' 00:13:53.708 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72524 00:13:53.708 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:53.708 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:53.708 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72524 00:13:53.708 killing process with pid 72524 00:13:53.708 Received shutdown signal, test time was about 10.000000 seconds 00:13:53.708 00:13:53.708 Latency(us) 00:13:53.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.708 =================================================================================================================== 00:13:53.708 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:53.708 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:53.708 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:53.708 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72524' 00:13:53.708 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72524 00:13:53.708 [2024-07-25 13:56:42.703051] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:53.708 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72524 00:13:53.966 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:53.966 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:53.966 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:53.966 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:53.966 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:53.966 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:53.966 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:53.966 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:53.966 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:53.966 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:53.966 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:53.966 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:53.966 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:53.967 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:53.967 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:53.967 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:53.967 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:53.967 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:53.967 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72552 00:13:53.967 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:53.967 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:53.967 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72552 /var/tmp/bdevperf.sock 00:13:53.967 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72552 ']' 00:13:53.967 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:53.967 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:53.967 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:53.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:53.967 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:53.967 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.967 [2024-07-25 13:56:42.979195] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:13:53.967 [2024-07-25 13:56:42.979510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72552 ] 00:13:54.225 [2024-07-25 13:56:43.121377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.225 [2024-07-25 13:56:43.241152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.484 [2024-07-25 13:56:43.295472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:55.050 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:55.050 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:55.050 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:55.309 [2024-07-25 13:56:44.169865] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:55.309 [2024-07-25 13:56:44.171886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24acc00 (9): Bad file descriptor 00:13:55.309 [2024-07-25 13:56:44.172882] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:55.309 [2024-07-25 13:56:44.172929] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:55.309 [2024-07-25 13:56:44.172957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
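This case omits --psk altogether, so no TLS credentials are generated on the initiator side; the connection is closed before the controller can initialize, and the same -5 response follows. The only change from the previous attempts is the missing key argument:

  $ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # no --psk: the qpair immediately reports a bad file descriptor (logged above), the controller ends up
  # in the failed state, and the request/response dump that follows carries the expected error.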
00:13:55.309 request: 00:13:55.309 { 00:13:55.309 "name": "TLSTEST", 00:13:55.309 "trtype": "tcp", 00:13:55.309 "traddr": "10.0.0.2", 00:13:55.309 "adrfam": "ipv4", 00:13:55.309 "trsvcid": "4420", 00:13:55.309 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.309 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:55.309 "prchk_reftag": false, 00:13:55.309 "prchk_guard": false, 00:13:55.309 "hdgst": false, 00:13:55.309 "ddgst": false, 00:13:55.309 "method": "bdev_nvme_attach_controller", 00:13:55.309 "req_id": 1 00:13:55.309 } 00:13:55.309 Got JSON-RPC error response 00:13:55.309 response: 00:13:55.309 { 00:13:55.309 "code": -5, 00:13:55.309 "message": "Input/output error" 00:13:55.309 } 00:13:55.309 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72552 00:13:55.309 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72552 ']' 00:13:55.309 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72552 00:13:55.309 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:55.309 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:55.309 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72552 00:13:55.309 killing process with pid 72552 00:13:55.309 Received shutdown signal, test time was about 10.000000 seconds 00:13:55.309 00:13:55.309 Latency(us) 00:13:55.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.309 =================================================================================================================== 00:13:55.309 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:55.309 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:55.309 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:55.309 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72552' 00:13:55.309 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72552 00:13:55.309 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72552 00:13:55.568 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:55.568 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:55.568 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:55.568 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:55.568 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:55.568 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 72099 00:13:55.568 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72099 ']' 00:13:55.568 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72099 00:13:55.568 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:55.568 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:55.568 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 
-- # ps --no-headers -o comm= 72099 00:13:55.568 killing process with pid 72099 00:13:55.568 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:55.568 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:55.568 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72099' 00:13:55.568 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72099 00:13:55.568 [2024-07-25 13:56:44.450702] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:55.568 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72099 00:13:55.827 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:55.827 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:55.827 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:55.827 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:55.827 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:55.827 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:13:55.827 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:55.827 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:55.827 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:13:55.827 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.BEv6v6GMa2 00:13:55.827 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:55.827 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.BEv6v6GMa2 00:13:55.827 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:13:55.827 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:55.827 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:55.827 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:55.827 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72595 00:13:55.828 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72595 00:13:55.828 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:55.828 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72595 ']' 00:13:55.828 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.828 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:13:55.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.828 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.828 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:55.828 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:55.828 [2024-07-25 13:56:44.826090] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:13:55.828 [2024-07-25 13:56:44.826212] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.085 [2024-07-25 13:56:44.966893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.085 [2024-07-25 13:56:45.083390] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.085 [2024-07-25 13:56:45.083455] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.085 [2024-07-25 13:56:45.083467] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.085 [2024-07-25 13:56:45.083476] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.085 [2024-07-25 13:56:45.083483] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.085 [2024-07-25 13:56:45.083516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.379 [2024-07-25 13:56:45.136202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:56.956 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.956 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:56.956 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:56.956 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:56.956 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.956 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.956 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.BEv6v6GMa2 00:13:56.956 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.BEv6v6GMa2 00:13:56.956 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:57.214 [2024-07-25 13:56:46.111698] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.214 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:57.472 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 -k 00:13:57.729 [2024-07-25 13:56:46.639812] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:57.729 [2024-07-25 13:56:46.640039] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.729 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:57.987 malloc0 00:13:57.987 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:58.245 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BEv6v6GMa2 00:13:58.504 [2024-07-25 13:56:47.419139] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:58.504 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BEv6v6GMa2 00:13:58.504 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:58.504 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:58.504 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:58.504 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BEv6v6GMa2' 00:13:58.504 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:58.504 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72648 00:13:58.504 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:58.504 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:58.504 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72648 /var/tmp/bdevperf.sock 00:13:58.504 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72648 ']' 00:13:58.504 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:58.504 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:58.504 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:58.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:58.504 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:58.504 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:58.504 [2024-07-25 13:56:47.490727] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
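The key used for the run now starting was produced a few entries back by format_interchange_psk: the raw hex string 00112233445566778899aabbccddeeff0011223344556677 becomes the interchange-format secret NVMeTLSkey-1:02:...:, which is written to /tmp/tmp.BEv6v6GMa2, chmod'ed to 0600 and registered with nvmf_subsystem_add_host --psk. A minimal sketch of that transformation, mirroring the python heredoc visible in the trace (the exact helper body is an assumption: it treats the ASCII hex string itself as the PSK bytes and appends a little-endian CRC32 before base64-encoding; 2 is the digest/length indicator that becomes the ':02:' field):

  $ key=00112233445566778899aabbccddeeff0011223344556677
  $ python3 - "$key" <<'EOF'
import base64, sys, zlib
psk = sys.argv[1].encode()                             # PSK bytes = the literal ASCII hex string
crc = zlib.crc32(psk).to_bytes(4, byteorder="little")  # 4-byte CRC32 appended to the key material
print("NVMeTLSkey-1:{:02x}:{}:".format(2, base64.b64encode(psk + crc).decode()))
EOF
  # expected output (matches the key_long value logged above):
  # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: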
00:13:58.504 [2024-07-25 13:56:47.491101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72648 ] 00:13:58.762 [2024-07-25 13:56:47.625984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.762 [2024-07-25 13:56:47.776485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.020 [2024-07-25 13:56:47.829399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:59.586 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:59.586 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:59.586 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BEv6v6GMa2 00:13:59.845 [2024-07-25 13:56:48.840316] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:59.845 [2024-07-25 13:56:48.840439] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:00.103 TLSTESTn1 00:14:00.103 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:00.103 Running I/O for 10 seconds... 00:14:10.073 00:14:10.073 Latency(us) 00:14:10.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.073 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:10.073 Verification LBA range: start 0x0 length 0x2000 00:14:10.073 TLSTESTn1 : 10.03 3839.19 15.00 0.00 0.00 33267.26 7447.27 261190.75 00:14:10.073 =================================================================================================================== 00:14:10.073 Total : 3839.19 15.00 0.00 0.00 33267.26 7447.27 261190.75 00:14:10.073 0 00:14:10.073 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:10.073 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 72648 00:14:10.073 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72648 ']' 00:14:10.073 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72648 00:14:10.073 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:10.331 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:10.331 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72648 00:14:10.331 killing process with pid 72648 00:14:10.331 Received shutdown signal, test time was about 10.000000 seconds 00:14:10.331 00:14:10.331 Latency(us) 00:14:10.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.331 =================================================================================================================== 00:14:10.331 Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:14:10.331 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:10.331 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:10.331 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72648' 00:14:10.331 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72648 00:14:10.331 [2024-07-25 13:56:59.126092] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:10.331 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72648 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.BEv6v6GMa2 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BEv6v6GMa2 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BEv6v6GMa2 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:10.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BEv6v6GMa2 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BEv6v6GMa2' 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72788 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72788 /var/tmp/bdevperf.sock 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72788 ']' 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.589 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.589 [2024-07-25 13:56:59.434918] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:14:10.589 [2024-07-25 13:56:59.435382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72788 ] 00:14:10.589 [2024-07-25 13:56:59.575952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.847 [2024-07-25 13:56:59.698260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.847 [2024-07-25 13:56:59.752608] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:11.781 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:11.781 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:11.781 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BEv6v6GMa2 00:14:11.781 [2024-07-25 13:57:00.733512] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:11.781 [2024-07-25 13:57:00.734031] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:11.781 [2024-07-25 13:57:00.734045] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.BEv6v6GMa2 00:14:11.781 request: 00:14:11.781 { 00:14:11.781 "name": "TLSTEST", 00:14:11.781 "trtype": "tcp", 00:14:11.781 "traddr": "10.0.0.2", 00:14:11.781 "adrfam": "ipv4", 00:14:11.781 "trsvcid": "4420", 00:14:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:11.781 "prchk_reftag": false, 00:14:11.781 "prchk_guard": false, 00:14:11.781 "hdgst": false, 00:14:11.781 "ddgst": false, 00:14:11.781 "psk": "/tmp/tmp.BEv6v6GMa2", 00:14:11.781 "method": "bdev_nvme_attach_controller", 00:14:11.781 "req_id": 1 00:14:11.782 } 00:14:11.782 Got JSON-RPC error response 00:14:11.782 response: 00:14:11.782 { 00:14:11.782 "code": -1, 00:14:11.782 "message": "Operation not permitted" 00:14:11.782 } 00:14:11.782 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72788 00:14:11.782 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72788 ']' 00:14:11.782 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72788 00:14:11.782 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:11.782 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:11.782 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72788 00:14:11.782 killing process with pid 72788 00:14:11.782 Received shutdown signal, test time was about 10.000000 seconds 00:14:11.782 00:14:11.782 Latency(us) 00:14:11.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.782 =================================================================================================================== 00:14:11.782 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:11.782 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 
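Unlike the handshake failures earlier in this section, this case never reaches the target: bdev_nvme_load_psk rejects the key file because of its mode, so the RPC fails with -1 (Operation not permitted) rather than -5. The only difference from the successful TLSTESTn1 run above is the chmod issued in between:

  $ chmod 0600 /tmp/tmp.BEv6v6GMa2   # how the key file was prepared for the successful run (target/tls.sh@162)
  $ chmod 0666 /tmp/tmp.BEv6v6GMa2   # this negative case (target/tls.sh@170); group/other access makes
                                     # bdev_nvme_load_psk log 'Incorrect permissions for PSK file'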
00:14:11.782 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:11.782 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72788' 00:14:11.782 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72788 00:14:11.782 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72788 00:14:12.039 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:12.039 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:12.039 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:12.039 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:12.039 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:12.039 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 72595 00:14:12.039 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72595 ']' 00:14:12.039 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72595 00:14:12.039 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:12.039 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:12.039 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72595 00:14:12.039 killing process with pid 72595 00:14:12.039 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:12.039 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:12.039 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72595' 00:14:12.040 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72595 00:14:12.040 [2024-07-25 13:57:01.034440] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:12.040 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72595 00:14:12.298 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:12.298 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:12.298 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:12.298 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:12.298 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72821 00:14:12.298 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:12.298 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72821 00:14:12.298 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72821 ']' 00:14:12.298 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.298 13:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:12.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.298 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.298 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:12.298 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:12.555 [2024-07-25 13:57:01.341050] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:14:12.555 [2024-07-25 13:57:01.341166] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.555 [2024-07-25 13:57:01.482725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.813 [2024-07-25 13:57:01.598697] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.813 [2024-07-25 13:57:01.598764] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.813 [2024-07-25 13:57:01.598777] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.813 [2024-07-25 13:57:01.598785] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.813 [2024-07-25 13:57:01.598793] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
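Every nvmf_tgt in this run is launched with -e 0xFFFF, so all tracepoint groups are enabled and the notices above describe how to inspect them while the target is alive. The two options the target itself suggests, sketched under the assumption that the spdk_trace tool from the same build sits at build/bin/spdk_trace and that the shm instance id is 0 (matching -i 0):

    # Decode a live snapshot of the nvmf tracepoints from shm instance 0
    build/bin/spdk_trace -s nvmf -i 0
    # Or keep the raw shared-memory buffer for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0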
00:14:12.813 [2024-07-25 13:57:01.598821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.813 [2024-07-25 13:57:01.651453] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:13.379 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:13.379 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:13.379 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.379 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:13.379 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.379 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.379 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.BEv6v6GMa2 00:14:13.379 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:13.379 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.BEv6v6GMa2 00:14:13.379 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:14:13.379 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:13.379 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:14:13.379 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:13.379 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.BEv6v6GMa2 00:14:13.379 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.BEv6v6GMa2 00:14:13.379 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:13.637 [2024-07-25 13:57:02.630443] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.637 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:13.894 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:14.154 [2024-07-25 13:57:03.138538] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:14.154 [2024-07-25 13:57:03.138776] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.154 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:14.416 malloc0 00:14:14.416 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:14.676 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BEv6v6GMa2 00:14:14.935 [2024-07-25 13:57:03.861783] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:14.935 [2024-07-25 13:57:03.861843] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:14.935 [2024-07-25 13:57:03.861879] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:14.935 request: 00:14:14.935 { 00:14:14.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:14.935 "host": "nqn.2016-06.io.spdk:host1", 00:14:14.935 "psk": "/tmp/tmp.BEv6v6GMa2", 00:14:14.935 "method": "nvmf_subsystem_add_host", 00:14:14.935 "req_id": 1 00:14:14.935 } 00:14:14.935 Got JSON-RPC error response 00:14:14.935 response: 00:14:14.935 { 00:14:14.935 "code": -32603, 00:14:14.935 "message": "Internal error" 00:14:14.935 } 00:14:14.935 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:14.935 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:14.935 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:14.935 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:14.935 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 72821 00:14:14.935 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72821 ']' 00:14:14.935 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72821 00:14:14.935 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:14.935 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:14.935 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72821 00:14:14.935 killing process with pid 72821 00:14:14.935 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:14.935 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:14.935 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72821' 00:14:14.935 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72821 00:14:14.935 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72821 00:14:15.194 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.BEv6v6GMa2 00:14:15.194 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:15.194 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:15.194 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:15.194 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
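The -32603 "Internal error" above is the target-side counterpart of the earlier attach failure: tcp.c cannot retrieve the PSK for nvmf_subsystem_add_host while the key file is still too permissive, so the host entry is refused. The suite then kills that target, tightens the file mode (the chmod 0600 at tls.sh@181 above), and brings up a fresh one; the very same RPC goes through in the next pass below. Condensed, the fix and retry against the default /var/tmp/spdk.sock look like:

    # Target side: the PSK file has to be private before a host can be bound to it
    chmod 0600 /tmp/tmp.BEv6v6GMa2
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BEv6v6GMa2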
00:14:15.194 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72889 00:14:15.194 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:15.194 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72889 00:14:15.194 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72889 ']' 00:14:15.194 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.194 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:15.194 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.194 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:15.194 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.452 [2024-07-25 13:57:04.231154] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:14:15.452 [2024-07-25 13:57:04.231290] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.452 [2024-07-25 13:57:04.372536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.711 [2024-07-25 13:57:04.510378] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.711 [2024-07-25 13:57:04.510442] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.711 [2024-07-25 13:57:04.510454] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.711 [2024-07-25 13:57:04.510463] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.711 [2024-07-25 13:57:04.510471] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:15.711 [2024-07-25 13:57:04.510501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.711 [2024-07-25 13:57:04.563192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:16.278 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:16.278 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:16.278 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:16.278 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:16.278 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:16.278 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.278 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.BEv6v6GMa2 00:14:16.278 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.BEv6v6GMa2 00:14:16.278 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:16.536 [2024-07-25 13:57:05.530969] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:16.536 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:16.793 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:17.051 [2024-07-25 13:57:06.063032] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:17.051 [2024-07-25 13:57:06.063275] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.309 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:17.309 malloc0 00:14:17.309 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:17.874 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BEv6v6GMa2 00:14:17.875 [2024-07-25 13:57:06.874441] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:17.875 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=72938 00:14:17.875 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:17.875 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:17.875 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 72938 /var/tmp/bdevperf.sock 00:14:17.875 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72938 ']' 
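The calls above (setup_nvmf_tgt, tls.sh@51 through @58) are the full TLS bring-up that the rest of the run reuses: a TCP transport, a subsystem backed by a 32 MiB malloc bdev, a listener flagged secure with -k, and a host entry tied to the PSK, which now succeeds since the key file is 0600. Collected in one place, and assuming a freshly started nvmf_tgt answering on the default /var/tmp/spdk.sock, the sequence is roughly:

    # TLS-enabled NVMe/TCP target bring-up, as exercised by target/tls.sh
    rpc="scripts/rpc.py"
    key="/tmp/tmp.BEv6v6GMa2"                            # PSK file, mode 0600
    $rpc nvmf_create_transport -t tcp -o                 # TCP transport (suite's options)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                    # -k marks the listener TLS-capable
    $rpc bdev_malloc_create 32 4096 -b malloc0           # 32 MiB bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"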
00:14:17.875 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:17.875 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:17.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:17.875 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:17.875 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:17.875 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.132 [2024-07-25 13:57:06.936973] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:14:18.132 [2024-07-25 13:57:06.937070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72938 ] 00:14:18.132 [2024-07-25 13:57:07.086850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.425 [2024-07-25 13:57:07.239920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.425 [2024-07-25 13:57:07.296731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:18.991 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:18.991 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:18.991 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BEv6v6GMa2 00:14:19.249 [2024-07-25 13:57:08.213331] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:19.249 [2024-07-25 13:57:08.213472] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:19.507 TLSTESTn1 00:14:19.507 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:19.765 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:14:19.765 "subsystems": [ 00:14:19.765 { 00:14:19.765 "subsystem": "keyring", 00:14:19.765 "config": [] 00:14:19.765 }, 00:14:19.765 { 00:14:19.765 "subsystem": "iobuf", 00:14:19.765 "config": [ 00:14:19.765 { 00:14:19.765 "method": "iobuf_set_options", 00:14:19.765 "params": { 00:14:19.765 "small_pool_count": 8192, 00:14:19.765 "large_pool_count": 1024, 00:14:19.765 "small_bufsize": 8192, 00:14:19.765 "large_bufsize": 135168 00:14:19.765 } 00:14:19.765 } 00:14:19.765 ] 00:14:19.765 }, 00:14:19.765 { 00:14:19.765 "subsystem": "sock", 00:14:19.765 "config": [ 00:14:19.765 { 00:14:19.765 "method": "sock_set_default_impl", 00:14:19.765 "params": { 00:14:19.765 "impl_name": "uring" 00:14:19.765 } 00:14:19.765 }, 00:14:19.765 { 00:14:19.765 "method": "sock_impl_set_options", 00:14:19.765 "params": { 00:14:19.765 "impl_name": "ssl", 00:14:19.765 "recv_buf_size": 4096, 00:14:19.765 
"send_buf_size": 4096, 00:14:19.765 "enable_recv_pipe": true, 00:14:19.765 "enable_quickack": false, 00:14:19.765 "enable_placement_id": 0, 00:14:19.765 "enable_zerocopy_send_server": true, 00:14:19.765 "enable_zerocopy_send_client": false, 00:14:19.765 "zerocopy_threshold": 0, 00:14:19.765 "tls_version": 0, 00:14:19.765 "enable_ktls": false 00:14:19.765 } 00:14:19.765 }, 00:14:19.765 { 00:14:19.765 "method": "sock_impl_set_options", 00:14:19.765 "params": { 00:14:19.765 "impl_name": "posix", 00:14:19.765 "recv_buf_size": 2097152, 00:14:19.765 "send_buf_size": 2097152, 00:14:19.765 "enable_recv_pipe": true, 00:14:19.765 "enable_quickack": false, 00:14:19.765 "enable_placement_id": 0, 00:14:19.765 "enable_zerocopy_send_server": true, 00:14:19.765 "enable_zerocopy_send_client": false, 00:14:19.765 "zerocopy_threshold": 0, 00:14:19.765 "tls_version": 0, 00:14:19.765 "enable_ktls": false 00:14:19.765 } 00:14:19.765 }, 00:14:19.765 { 00:14:19.765 "method": "sock_impl_set_options", 00:14:19.765 "params": { 00:14:19.765 "impl_name": "uring", 00:14:19.765 "recv_buf_size": 2097152, 00:14:19.765 "send_buf_size": 2097152, 00:14:19.765 "enable_recv_pipe": true, 00:14:19.765 "enable_quickack": false, 00:14:19.765 "enable_placement_id": 0, 00:14:19.765 "enable_zerocopy_send_server": false, 00:14:19.765 "enable_zerocopy_send_client": false, 00:14:19.765 "zerocopy_threshold": 0, 00:14:19.765 "tls_version": 0, 00:14:19.765 "enable_ktls": false 00:14:19.765 } 00:14:19.765 } 00:14:19.765 ] 00:14:19.765 }, 00:14:19.765 { 00:14:19.765 "subsystem": "vmd", 00:14:19.765 "config": [] 00:14:19.765 }, 00:14:19.765 { 00:14:19.765 "subsystem": "accel", 00:14:19.765 "config": [ 00:14:19.765 { 00:14:19.765 "method": "accel_set_options", 00:14:19.765 "params": { 00:14:19.765 "small_cache_size": 128, 00:14:19.765 "large_cache_size": 16, 00:14:19.765 "task_count": 2048, 00:14:19.765 "sequence_count": 2048, 00:14:19.765 "buf_count": 2048 00:14:19.765 } 00:14:19.765 } 00:14:19.765 ] 00:14:19.765 }, 00:14:19.765 { 00:14:19.765 "subsystem": "bdev", 00:14:19.765 "config": [ 00:14:19.765 { 00:14:19.765 "method": "bdev_set_options", 00:14:19.765 "params": { 00:14:19.765 "bdev_io_pool_size": 65535, 00:14:19.765 "bdev_io_cache_size": 256, 00:14:19.765 "bdev_auto_examine": true, 00:14:19.765 "iobuf_small_cache_size": 128, 00:14:19.765 "iobuf_large_cache_size": 16 00:14:19.765 } 00:14:19.765 }, 00:14:19.765 { 00:14:19.765 "method": "bdev_raid_set_options", 00:14:19.765 "params": { 00:14:19.765 "process_window_size_kb": 1024, 00:14:19.765 "process_max_bandwidth_mb_sec": 0 00:14:19.765 } 00:14:19.765 }, 00:14:19.765 { 00:14:19.765 "method": "bdev_iscsi_set_options", 00:14:19.765 "params": { 00:14:19.765 "timeout_sec": 30 00:14:19.765 } 00:14:19.765 }, 00:14:19.765 { 00:14:19.765 "method": "bdev_nvme_set_options", 00:14:19.765 "params": { 00:14:19.765 "action_on_timeout": "none", 00:14:19.765 "timeout_us": 0, 00:14:19.765 "timeout_admin_us": 0, 00:14:19.765 "keep_alive_timeout_ms": 10000, 00:14:19.765 "arbitration_burst": 0, 00:14:19.765 "low_priority_weight": 0, 00:14:19.765 "medium_priority_weight": 0, 00:14:19.765 "high_priority_weight": 0, 00:14:19.766 "nvme_adminq_poll_period_us": 10000, 00:14:19.766 "nvme_ioq_poll_period_us": 0, 00:14:19.766 "io_queue_requests": 0, 00:14:19.766 "delay_cmd_submit": true, 00:14:19.766 "transport_retry_count": 4, 00:14:19.766 "bdev_retry_count": 3, 00:14:19.766 "transport_ack_timeout": 0, 00:14:19.766 "ctrlr_loss_timeout_sec": 0, 00:14:19.766 "reconnect_delay_sec": 0, 00:14:19.766 
"fast_io_fail_timeout_sec": 0, 00:14:19.766 "disable_auto_failback": false, 00:14:19.766 "generate_uuids": false, 00:14:19.766 "transport_tos": 0, 00:14:19.766 "nvme_error_stat": false, 00:14:19.766 "rdma_srq_size": 0, 00:14:19.766 "io_path_stat": false, 00:14:19.766 "allow_accel_sequence": false, 00:14:19.766 "rdma_max_cq_size": 0, 00:14:19.766 "rdma_cm_event_timeout_ms": 0, 00:14:19.766 "dhchap_digests": [ 00:14:19.766 "sha256", 00:14:19.766 "sha384", 00:14:19.766 "sha512" 00:14:19.766 ], 00:14:19.766 "dhchap_dhgroups": [ 00:14:19.766 "null", 00:14:19.766 "ffdhe2048", 00:14:19.766 "ffdhe3072", 00:14:19.766 "ffdhe4096", 00:14:19.766 "ffdhe6144", 00:14:19.766 "ffdhe8192" 00:14:19.766 ] 00:14:19.766 } 00:14:19.766 }, 00:14:19.766 { 00:14:19.766 "method": "bdev_nvme_set_hotplug", 00:14:19.766 "params": { 00:14:19.766 "period_us": 100000, 00:14:19.766 "enable": false 00:14:19.766 } 00:14:19.766 }, 00:14:19.766 { 00:14:19.766 "method": "bdev_malloc_create", 00:14:19.766 "params": { 00:14:19.766 "name": "malloc0", 00:14:19.766 "num_blocks": 8192, 00:14:19.766 "block_size": 4096, 00:14:19.766 "physical_block_size": 4096, 00:14:19.766 "uuid": "4b987c16-2982-467c-aa21-a0f09cb38c3f", 00:14:19.766 "optimal_io_boundary": 0, 00:14:19.766 "md_size": 0, 00:14:19.766 "dif_type": 0, 00:14:19.766 "dif_is_head_of_md": false, 00:14:19.766 "dif_pi_format": 0 00:14:19.766 } 00:14:19.766 }, 00:14:19.766 { 00:14:19.766 "method": "bdev_wait_for_examine" 00:14:19.766 } 00:14:19.766 ] 00:14:19.766 }, 00:14:19.766 { 00:14:19.766 "subsystem": "nbd", 00:14:19.766 "config": [] 00:14:19.766 }, 00:14:19.766 { 00:14:19.766 "subsystem": "scheduler", 00:14:19.766 "config": [ 00:14:19.766 { 00:14:19.766 "method": "framework_set_scheduler", 00:14:19.766 "params": { 00:14:19.766 "name": "static" 00:14:19.766 } 00:14:19.766 } 00:14:19.766 ] 00:14:19.766 }, 00:14:19.766 { 00:14:19.766 "subsystem": "nvmf", 00:14:19.766 "config": [ 00:14:19.766 { 00:14:19.766 "method": "nvmf_set_config", 00:14:19.766 "params": { 00:14:19.766 "discovery_filter": "match_any", 00:14:19.766 "admin_cmd_passthru": { 00:14:19.766 "identify_ctrlr": false 00:14:19.766 } 00:14:19.766 } 00:14:19.766 }, 00:14:19.766 { 00:14:19.766 "method": "nvmf_set_max_subsystems", 00:14:19.766 "params": { 00:14:19.766 "max_subsystems": 1024 00:14:19.766 } 00:14:19.766 }, 00:14:19.766 { 00:14:19.766 "method": "nvmf_set_crdt", 00:14:19.766 "params": { 00:14:19.766 "crdt1": 0, 00:14:19.766 "crdt2": 0, 00:14:19.766 "crdt3": 0 00:14:19.766 } 00:14:19.766 }, 00:14:19.766 { 00:14:19.766 "method": "nvmf_create_transport", 00:14:19.766 "params": { 00:14:19.766 "trtype": "TCP", 00:14:19.766 "max_queue_depth": 128, 00:14:19.766 "max_io_qpairs_per_ctrlr": 127, 00:14:19.766 "in_capsule_data_size": 4096, 00:14:19.766 "max_io_size": 131072, 00:14:19.766 "io_unit_size": 131072, 00:14:19.766 "max_aq_depth": 128, 00:14:19.766 "num_shared_buffers": 511, 00:14:19.766 "buf_cache_size": 4294967295, 00:14:19.766 "dif_insert_or_strip": false, 00:14:19.766 "zcopy": false, 00:14:19.766 "c2h_success": false, 00:14:19.766 "sock_priority": 0, 00:14:19.766 "abort_timeout_sec": 1, 00:14:19.766 "ack_timeout": 0, 00:14:19.766 "data_wr_pool_size": 0 00:14:19.766 } 00:14:19.766 }, 00:14:19.766 { 00:14:19.766 "method": "nvmf_create_subsystem", 00:14:19.766 "params": { 00:14:19.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.766 "allow_any_host": false, 00:14:19.766 "serial_number": "SPDK00000000000001", 00:14:19.766 "model_number": "SPDK bdev Controller", 00:14:19.766 "max_namespaces": 10, 00:14:19.766 
"min_cntlid": 1, 00:14:19.766 "max_cntlid": 65519, 00:14:19.766 "ana_reporting": false 00:14:19.766 } 00:14:19.766 }, 00:14:19.766 { 00:14:19.766 "method": "nvmf_subsystem_add_host", 00:14:19.766 "params": { 00:14:19.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.766 "host": "nqn.2016-06.io.spdk:host1", 00:14:19.766 "psk": "/tmp/tmp.BEv6v6GMa2" 00:14:19.766 } 00:14:19.766 }, 00:14:19.766 { 00:14:19.766 "method": "nvmf_subsystem_add_ns", 00:14:19.766 "params": { 00:14:19.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.766 "namespace": { 00:14:19.766 "nsid": 1, 00:14:19.766 "bdev_name": "malloc0", 00:14:19.766 "nguid": "4B987C162982467CAA21A0F09CB38C3F", 00:14:19.766 "uuid": "4b987c16-2982-467c-aa21-a0f09cb38c3f", 00:14:19.766 "no_auto_visible": false 00:14:19.766 } 00:14:19.766 } 00:14:19.766 }, 00:14:19.766 { 00:14:19.766 "method": "nvmf_subsystem_add_listener", 00:14:19.766 "params": { 00:14:19.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.766 "listen_address": { 00:14:19.766 "trtype": "TCP", 00:14:19.766 "adrfam": "IPv4", 00:14:19.766 "traddr": "10.0.0.2", 00:14:19.766 "trsvcid": "4420" 00:14:19.766 }, 00:14:19.766 "secure_channel": true 00:14:19.766 } 00:14:19.766 } 00:14:19.766 ] 00:14:19.766 } 00:14:19.766 ] 00:14:19.766 }' 00:14:19.766 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:20.333 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:20.333 "subsystems": [ 00:14:20.333 { 00:14:20.333 "subsystem": "keyring", 00:14:20.333 "config": [] 00:14:20.333 }, 00:14:20.333 { 00:14:20.333 "subsystem": "iobuf", 00:14:20.333 "config": [ 00:14:20.333 { 00:14:20.333 "method": "iobuf_set_options", 00:14:20.333 "params": { 00:14:20.333 "small_pool_count": 8192, 00:14:20.333 "large_pool_count": 1024, 00:14:20.333 "small_bufsize": 8192, 00:14:20.333 "large_bufsize": 135168 00:14:20.333 } 00:14:20.333 } 00:14:20.333 ] 00:14:20.333 }, 00:14:20.333 { 00:14:20.333 "subsystem": "sock", 00:14:20.333 "config": [ 00:14:20.333 { 00:14:20.333 "method": "sock_set_default_impl", 00:14:20.333 "params": { 00:14:20.333 "impl_name": "uring" 00:14:20.333 } 00:14:20.333 }, 00:14:20.333 { 00:14:20.333 "method": "sock_impl_set_options", 00:14:20.333 "params": { 00:14:20.333 "impl_name": "ssl", 00:14:20.333 "recv_buf_size": 4096, 00:14:20.333 "send_buf_size": 4096, 00:14:20.333 "enable_recv_pipe": true, 00:14:20.333 "enable_quickack": false, 00:14:20.333 "enable_placement_id": 0, 00:14:20.333 "enable_zerocopy_send_server": true, 00:14:20.333 "enable_zerocopy_send_client": false, 00:14:20.333 "zerocopy_threshold": 0, 00:14:20.333 "tls_version": 0, 00:14:20.333 "enable_ktls": false 00:14:20.333 } 00:14:20.333 }, 00:14:20.333 { 00:14:20.333 "method": "sock_impl_set_options", 00:14:20.333 "params": { 00:14:20.333 "impl_name": "posix", 00:14:20.333 "recv_buf_size": 2097152, 00:14:20.333 "send_buf_size": 2097152, 00:14:20.333 "enable_recv_pipe": true, 00:14:20.333 "enable_quickack": false, 00:14:20.333 "enable_placement_id": 0, 00:14:20.333 "enable_zerocopy_send_server": true, 00:14:20.333 "enable_zerocopy_send_client": false, 00:14:20.333 "zerocopy_threshold": 0, 00:14:20.333 "tls_version": 0, 00:14:20.333 "enable_ktls": false 00:14:20.333 } 00:14:20.333 }, 00:14:20.333 { 00:14:20.333 "method": "sock_impl_set_options", 00:14:20.333 "params": { 00:14:20.333 "impl_name": "uring", 00:14:20.333 "recv_buf_size": 2097152, 00:14:20.334 "send_buf_size": 2097152, 
00:14:20.334 "enable_recv_pipe": true, 00:14:20.334 "enable_quickack": false, 00:14:20.334 "enable_placement_id": 0, 00:14:20.334 "enable_zerocopy_send_server": false, 00:14:20.334 "enable_zerocopy_send_client": false, 00:14:20.334 "zerocopy_threshold": 0, 00:14:20.334 "tls_version": 0, 00:14:20.334 "enable_ktls": false 00:14:20.334 } 00:14:20.334 } 00:14:20.334 ] 00:14:20.334 }, 00:14:20.334 { 00:14:20.334 "subsystem": "vmd", 00:14:20.334 "config": [] 00:14:20.334 }, 00:14:20.334 { 00:14:20.334 "subsystem": "accel", 00:14:20.334 "config": [ 00:14:20.334 { 00:14:20.334 "method": "accel_set_options", 00:14:20.334 "params": { 00:14:20.334 "small_cache_size": 128, 00:14:20.334 "large_cache_size": 16, 00:14:20.334 "task_count": 2048, 00:14:20.334 "sequence_count": 2048, 00:14:20.334 "buf_count": 2048 00:14:20.334 } 00:14:20.334 } 00:14:20.334 ] 00:14:20.334 }, 00:14:20.334 { 00:14:20.334 "subsystem": "bdev", 00:14:20.334 "config": [ 00:14:20.334 { 00:14:20.334 "method": "bdev_set_options", 00:14:20.334 "params": { 00:14:20.334 "bdev_io_pool_size": 65535, 00:14:20.334 "bdev_io_cache_size": 256, 00:14:20.334 "bdev_auto_examine": true, 00:14:20.334 "iobuf_small_cache_size": 128, 00:14:20.334 "iobuf_large_cache_size": 16 00:14:20.334 } 00:14:20.334 }, 00:14:20.334 { 00:14:20.334 "method": "bdev_raid_set_options", 00:14:20.334 "params": { 00:14:20.334 "process_window_size_kb": 1024, 00:14:20.334 "process_max_bandwidth_mb_sec": 0 00:14:20.334 } 00:14:20.334 }, 00:14:20.334 { 00:14:20.334 "method": "bdev_iscsi_set_options", 00:14:20.334 "params": { 00:14:20.334 "timeout_sec": 30 00:14:20.334 } 00:14:20.334 }, 00:14:20.334 { 00:14:20.334 "method": "bdev_nvme_set_options", 00:14:20.334 "params": { 00:14:20.334 "action_on_timeout": "none", 00:14:20.334 "timeout_us": 0, 00:14:20.334 "timeout_admin_us": 0, 00:14:20.334 "keep_alive_timeout_ms": 10000, 00:14:20.334 "arbitration_burst": 0, 00:14:20.334 "low_priority_weight": 0, 00:14:20.334 "medium_priority_weight": 0, 00:14:20.334 "high_priority_weight": 0, 00:14:20.334 "nvme_adminq_poll_period_us": 10000, 00:14:20.334 "nvme_ioq_poll_period_us": 0, 00:14:20.334 "io_queue_requests": 512, 00:14:20.334 "delay_cmd_submit": true, 00:14:20.334 "transport_retry_count": 4, 00:14:20.334 "bdev_retry_count": 3, 00:14:20.334 "transport_ack_timeout": 0, 00:14:20.334 "ctrlr_loss_timeout_sec": 0, 00:14:20.334 "reconnect_delay_sec": 0, 00:14:20.334 "fast_io_fail_timeout_sec": 0, 00:14:20.334 "disable_auto_failback": false, 00:14:20.334 "generate_uuids": false, 00:14:20.334 "transport_tos": 0, 00:14:20.334 "nvme_error_stat": false, 00:14:20.334 "rdma_srq_size": 0, 00:14:20.334 "io_path_stat": false, 00:14:20.334 "allow_accel_sequence": false, 00:14:20.334 "rdma_max_cq_size": 0, 00:14:20.334 "rdma_cm_event_timeout_ms": 0, 00:14:20.334 "dhchap_digests": [ 00:14:20.334 "sha256", 00:14:20.334 "sha384", 00:14:20.334 "sha512" 00:14:20.334 ], 00:14:20.334 "dhchap_dhgroups": [ 00:14:20.334 "null", 00:14:20.334 "ffdhe2048", 00:14:20.334 "ffdhe3072", 00:14:20.334 "ffdhe4096", 00:14:20.334 "ffdhe6144", 00:14:20.334 "ffdhe8192" 00:14:20.334 ] 00:14:20.334 } 00:14:20.334 }, 00:14:20.334 { 00:14:20.334 "method": "bdev_nvme_attach_controller", 00:14:20.334 "params": { 00:14:20.334 "name": "TLSTEST", 00:14:20.334 "trtype": "TCP", 00:14:20.334 "adrfam": "IPv4", 00:14:20.334 "traddr": "10.0.0.2", 00:14:20.334 "trsvcid": "4420", 00:14:20.334 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.334 "prchk_reftag": false, 00:14:20.334 "prchk_guard": false, 00:14:20.334 "ctrlr_loss_timeout_sec": 0, 
00:14:20.334 "reconnect_delay_sec": 0, 00:14:20.334 "fast_io_fail_timeout_sec": 0, 00:14:20.334 "psk": "/tmp/tmp.BEv6v6GMa2", 00:14:20.334 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:20.334 "hdgst": false, 00:14:20.334 "ddgst": false 00:14:20.334 } 00:14:20.334 }, 00:14:20.334 { 00:14:20.334 "method": "bdev_nvme_set_hotplug", 00:14:20.334 "params": { 00:14:20.334 "period_us": 100000, 00:14:20.334 "enable": false 00:14:20.334 } 00:14:20.334 }, 00:14:20.334 { 00:14:20.334 "method": "bdev_wait_for_examine" 00:14:20.334 } 00:14:20.334 ] 00:14:20.334 }, 00:14:20.334 { 00:14:20.334 "subsystem": "nbd", 00:14:20.334 "config": [] 00:14:20.334 } 00:14:20.334 ] 00:14:20.334 }' 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 72938 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72938 ']' 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72938 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72938 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:20.334 killing process with pid 72938 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72938' 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72938 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72938 00:14:20.334 Received shutdown signal, test time was about 10.000000 seconds 00:14:20.334 00:14:20.334 Latency(us) 00:14:20.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.334 =================================================================================================================== 00:14:20.334 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:20.334 [2024-07-25 13:57:09.096142] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 72889 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72889 ']' 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72889 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72889 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 72889' 00:14:20.334 killing process with pid 72889 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72889 00:14:20.334 [2024-07-25 13:57:09.355243] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:20.334 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72889 00:14:20.593 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:20.593 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:20.593 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:14:20.593 "subsystems": [ 00:14:20.593 { 00:14:20.593 "subsystem": "keyring", 00:14:20.593 "config": [] 00:14:20.593 }, 00:14:20.593 { 00:14:20.593 "subsystem": "iobuf", 00:14:20.593 "config": [ 00:14:20.593 { 00:14:20.593 "method": "iobuf_set_options", 00:14:20.594 "params": { 00:14:20.594 "small_pool_count": 8192, 00:14:20.594 "large_pool_count": 1024, 00:14:20.594 "small_bufsize": 8192, 00:14:20.594 "large_bufsize": 135168 00:14:20.594 } 00:14:20.594 } 00:14:20.594 ] 00:14:20.594 }, 00:14:20.594 { 00:14:20.594 "subsystem": "sock", 00:14:20.594 "config": [ 00:14:20.594 { 00:14:20.594 "method": "sock_set_default_impl", 00:14:20.594 "params": { 00:14:20.594 "impl_name": "uring" 00:14:20.594 } 00:14:20.594 }, 00:14:20.594 { 00:14:20.594 "method": "sock_impl_set_options", 00:14:20.594 "params": { 00:14:20.594 "impl_name": "ssl", 00:14:20.594 "recv_buf_size": 4096, 00:14:20.594 "send_buf_size": 4096, 00:14:20.594 "enable_recv_pipe": true, 00:14:20.594 "enable_quickack": false, 00:14:20.594 "enable_placement_id": 0, 00:14:20.594 "enable_zerocopy_send_server": true, 00:14:20.594 "enable_zerocopy_send_client": false, 00:14:20.594 "zerocopy_threshold": 0, 00:14:20.594 "tls_version": 0, 00:14:20.594 "enable_ktls": false 00:14:20.594 } 00:14:20.594 }, 00:14:20.594 { 00:14:20.594 "method": "sock_impl_set_options", 00:14:20.594 "params": { 00:14:20.594 "impl_name": "posix", 00:14:20.594 "recv_buf_size": 2097152, 00:14:20.594 "send_buf_size": 2097152, 00:14:20.594 "enable_recv_pipe": true, 00:14:20.594 "enable_quickack": false, 00:14:20.594 "enable_placement_id": 0, 00:14:20.594 "enable_zerocopy_send_server": true, 00:14:20.594 "enable_zerocopy_send_client": false, 00:14:20.594 "zerocopy_threshold": 0, 00:14:20.594 "tls_version": 0, 00:14:20.594 "enable_ktls": false 00:14:20.594 } 00:14:20.594 }, 00:14:20.594 { 00:14:20.594 "method": "sock_impl_set_options", 00:14:20.594 "params": { 00:14:20.594 "impl_name": "uring", 00:14:20.594 "recv_buf_size": 2097152, 00:14:20.594 "send_buf_size": 2097152, 00:14:20.594 "enable_recv_pipe": true, 00:14:20.594 "enable_quickack": false, 00:14:20.594 "enable_placement_id": 0, 00:14:20.594 "enable_zerocopy_send_server": false, 00:14:20.594 "enable_zerocopy_send_client": false, 00:14:20.594 "zerocopy_threshold": 0, 00:14:20.594 "tls_version": 0, 00:14:20.594 "enable_ktls": false 00:14:20.594 } 00:14:20.594 } 00:14:20.594 ] 00:14:20.594 }, 00:14:20.594 { 00:14:20.594 "subsystem": "vmd", 00:14:20.594 "config": [] 00:14:20.594 }, 00:14:20.594 { 00:14:20.594 "subsystem": "accel", 00:14:20.594 "config": [ 00:14:20.594 { 00:14:20.594 "method": "accel_set_options", 00:14:20.594 "params": { 00:14:20.594 "small_cache_size": 128, 00:14:20.594 "large_cache_size": 16, 
00:14:20.594 "task_count": 2048, 00:14:20.594 "sequence_count": 2048, 00:14:20.594 "buf_count": 2048 00:14:20.594 } 00:14:20.594 } 00:14:20.594 ] 00:14:20.594 }, 00:14:20.594 { 00:14:20.594 "subsystem": "bdev", 00:14:20.594 "config": [ 00:14:20.594 { 00:14:20.594 "method": "bdev_set_options", 00:14:20.594 "params": { 00:14:20.594 "bdev_io_pool_size": 65535, 00:14:20.594 "bdev_io_cache_size": 256, 00:14:20.594 "bdev_auto_examine": true, 00:14:20.594 "iobuf_small_cache_size": 128, 00:14:20.594 "iobuf_large_cache_size": 16 00:14:20.594 } 00:14:20.594 }, 00:14:20.594 { 00:14:20.594 "method": "bdev_raid_set_options", 00:14:20.594 "params": { 00:14:20.594 "process_window_size_kb": 1024, 00:14:20.594 "process_max_bandwidth_mb_sec": 0 00:14:20.594 } 00:14:20.594 }, 00:14:20.594 { 00:14:20.594 "method": "bdev_iscsi_set_options", 00:14:20.594 "params": { 00:14:20.594 "timeout_sec": 30 00:14:20.594 } 00:14:20.594 }, 00:14:20.594 { 00:14:20.594 "method": "bdev_nvme_set_options", 00:14:20.594 "params": { 00:14:20.594 "action_on_timeout": "none", 00:14:20.594 "timeout_us": 0, 00:14:20.594 "timeout_admin_us": 0, 00:14:20.594 "keep_alive_timeout_ms": 10000, 00:14:20.594 "arbitration_burst": 0, 00:14:20.594 "low_priority_weight": 0, 00:14:20.594 "medium_priority_weight": 0, 00:14:20.594 "high_priority_weight": 0, 00:14:20.594 "nvme_adminq_poll_period_us": 10000, 00:14:20.594 "nvme_ioq_poll_period_us": 0, 00:14:20.594 "io_queue_requests": 0, 00:14:20.594 "delay_cmd_submit": true, 00:14:20.594 "transport_retry_count": 4, 00:14:20.594 "bdev_retry_count": 3, 00:14:20.594 "transport_ack_timeout": 0, 00:14:20.594 "ctrlr_loss_timeout_sec": 0, 00:14:20.594 "reconnect_delay_sec": 0, 00:14:20.594 "fast_io_fail_timeout_sec": 0, 00:14:20.594 "disable_auto_failback": false, 00:14:20.594 "generate_uuids": false, 00:14:20.594 "transport_tos": 0, 00:14:20.594 "nvme_error_stat": false, 00:14:20.594 "rdma_srq_size": 0, 00:14:20.594 "io_path_stat": false, 00:14:20.594 "allow_accel_sequence": false, 00:14:20.594 "rdma_max_cq_size": 0, 00:14:20.594 "rdma_cm_event_timeout_ms": 0, 00:14:20.594 "dhchap_digests": [ 00:14:20.594 "sha256", 00:14:20.594 "sha384", 00:14:20.594 "sha512" 00:14:20.594 ], 00:14:20.594 "dhchap_dhgroups": [ 00:14:20.594 "null", 00:14:20.594 "ffdhe2048", 00:14:20.594 "ffdhe3072", 00:14:20.594 "ffdhe4096", 00:14:20.594 "ffdhe6144", 00:14:20.594 "ffdhe8192" 00:14:20.594 ] 00:14:20.594 } 00:14:20.594 }, 00:14:20.594 { 00:14:20.594 "method": "bdev_nvme_set_hotplug", 00:14:20.594 "params": { 00:14:20.594 "period_us": 100000, 00:14:20.594 "enable": false 00:14:20.594 } 00:14:20.594 }, 00:14:20.594 { 00:14:20.594 "method": "bdev_malloc_create", 00:14:20.594 "params": { 00:14:20.594 "name": "malloc0", 00:14:20.594 "num_blocks": 8192, 00:14:20.594 "block_size": 4096, 00:14:20.594 "physical_block_size": 4096, 00:14:20.594 "uuid": "4b987c16-2982-467c-aa21-a0f09cb38c3f", 00:14:20.594 "optimal_io_boundary": 0, 00:14:20.594 "md_size": 0, 00:14:20.594 "dif_type": 0, 00:14:20.594 "dif_is_head_of_md": false, 00:14:20.594 "dif_pi_format": 0 00:14:20.594 } 00:14:20.594 }, 00:14:20.594 { 00:14:20.594 "method": "bdev_wait_for_examine" 00:14:20.594 } 00:14:20.594 ] 00:14:20.594 }, 00:14:20.594 { 00:14:20.594 "subsystem": "nbd", 00:14:20.594 "config": [] 00:14:20.594 }, 00:14:20.594 { 00:14:20.594 "subsystem": "scheduler", 00:14:20.594 "config": [ 00:14:20.594 { 00:14:20.594 "method": "framework_set_scheduler", 00:14:20.594 "params": { 00:14:20.594 "name": "static" 00:14:20.594 } 00:14:20.594 } 00:14:20.594 ] 00:14:20.594 }, 
00:14:20.594 { 00:14:20.594 "subsystem": "nvmf", 00:14:20.594 "config": [ 00:14:20.594 { 00:14:20.594 "method": "nvmf_set_config", 00:14:20.594 "params": { 00:14:20.594 "discovery_filter": "match_any", 00:14:20.594 "admin_cmd_passthru": { 00:14:20.594 "identify_ctrlr": false 00:14:20.594 } 00:14:20.594 } 00:14:20.594 }, 00:14:20.594 { 00:14:20.594 "method": "nvmf_set_max_subsystems", 00:14:20.594 "params": { 00:14:20.594 "max_subsystems": 1024 00:14:20.594 } 00:14:20.594 }, 00:14:20.594 { 00:14:20.594 "method": "nvmf_set_crdt", 00:14:20.594 "params": { 00:14:20.594 "crdt1": 0, 00:14:20.594 "crdt2": 0, 00:14:20.594 "crdt3": 0 00:14:20.594 } 00:14:20.594 }, 00:14:20.594 { 00:14:20.594 "method": "nvmf_create_transport", 00:14:20.594 "params": { 00:14:20.594 "trtype": "TCP", 00:14:20.595 "max_queue_depth": 128, 00:14:20.595 "max_io_qpairs_per_ctrlr": 127, 00:14:20.595 "in_capsule_data_size": 4096, 00:14:20.595 "max_io_size": 131072, 00:14:20.595 "io_unit_size": 131072, 00:14:20.595 "max_aq_depth": 128, 00:14:20.595 "num_shared_buffers": 511, 00:14:20.595 "buf_cache_size": 4294967295, 00:14:20.595 "dif_insert_or_strip": false, 00:14:20.595 "zcopy": false, 00:14:20.595 "c2h_success": false, 00:14:20.595 "sock_priority": 0, 00:14:20.595 "abort_timeout_sec": 1, 00:14:20.595 "ack_timeout": 0, 00:14:20.595 "data_wr_pool_size": 0 00:14:20.595 } 00:14:20.595 }, 00:14:20.595 { 00:14:20.595 "method": "nvmf_create_subsystem", 00:14:20.595 "params": { 00:14:20.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.595 "allow_any_host": false, 00:14:20.595 "serial_number": "SPDK00000000000001", 00:14:20.595 "model_number": "SPDK bdev Controller", 00:14:20.595 "max_namespaces": 10, 00:14:20.595 "min_cntlid": 1, 00:14:20.595 "max_cntlid": 65519, 00:14:20.595 "ana_reporting": false 00:14:20.595 } 00:14:20.595 }, 00:14:20.595 { 00:14:20.595 "method": "nvmf_subsystem_add_host", 00:14:20.595 "params": { 00:14:20.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.595 "host": "nqn.2016-06.io.spdk:host1", 00:14:20.595 "psk": "/tmp/tmp.BEv6v6GMa2" 00:14:20.595 } 00:14:20.595 }, 00:14:20.595 { 00:14:20.595 "method": "nvmf_subsystem_add_ns", 00:14:20.595 "params": { 00:14:20.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.595 "namespace": { 00:14:20.595 "nsid": 1, 00:14:20.595 "bdev_name": "malloc0", 00:14:20.595 "nguid": "4B987C162982467CAA21A0F09CB38C3F", 00:14:20.595 "uuid": "4b987c16-2982-467c-aa21-a0f09cb38c3f", 00:14:20.595 "no_auto_visible": false 00:14:20.595 } 00:14:20.595 } 00:14:20.595 }, 00:14:20.595 { 00:14:20.595 "method": "nvmf_subsystem_add_listener", 00:14:20.595 "params": { 00:14:20.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.595 "listen_address": { 00:14:20.595 "trtype": "TCP", 00:14:20.595 "adrfam": "IPv4", 00:14:20.595 "traddr": "10.0.0.2", 00:14:20.595 "trsvcid": "4420" 00:14:20.595 }, 00:14:20.595 "secure_channel": true 00:14:20.595 } 00:14:20.595 } 00:14:20.595 ] 00:14:20.595 } 00:14:20.595 ] 00:14:20.595 }' 00:14:20.595 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:20.595 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.595 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72991 00:14:20.595 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:20.595 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- 
# waitforlisten 72991 00:14:20.595 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72991 ']' 00:14:20.595 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.595 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.595 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.595 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.595 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.854 [2024-07-25 13:57:09.646667] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:14:20.854 [2024-07-25 13:57:09.646762] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.854 [2024-07-25 13:57:09.780142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.111 [2024-07-25 13:57:09.895455] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.111 [2024-07-25 13:57:09.895517] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.111 [2024-07-25 13:57:09.895528] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.111 [2024-07-25 13:57:09.895536] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.111 [2024-07-25 13:57:09.895543] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.111 [2024-07-25 13:57:09.895639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.111 [2024-07-25 13:57:10.061552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:21.111 [2024-07-25 13:57:10.131595] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.369 [2024-07-25 13:57:10.147507] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:21.369 [2024-07-25 13:57:10.163540] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:21.369 [2024-07-25 13:57:10.171574] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.627 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:21.627 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:21.627 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:21.627 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:21.627 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
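The long JSON blobs above were not written by hand: tls.sh@196 and @197 pull them from the live target and the live bdevperf with save_config, and tls.sh@203 then feeds the target copy straight back in as startup configuration through /dev/fd/62, so the secure listener and the PSK-bound host survive a restart without replaying each individual RPC. A rough file-based equivalent of that round trip:

    # Capture the running target's full configuration ...
    scripts/rpc.py save_config > tgt_config.json
    # ... and boot a new target directly from it, skipping the per-object RPCs
    build/bin/nvmf_tgt -m 0x2 -c tgt_config.json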
00:14:21.885 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.885 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73023 00:14:21.886 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73023 /var/tmp/bdevperf.sock 00:14:21.886 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73023 ']' 00:14:21.886 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:21.886 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:21.886 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:21.886 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:21.886 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:21.886 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:14:21.886 "subsystems": [ 00:14:21.886 { 00:14:21.886 "subsystem": "keyring", 00:14:21.886 "config": [] 00:14:21.886 }, 00:14:21.886 { 00:14:21.886 "subsystem": "iobuf", 00:14:21.886 "config": [ 00:14:21.886 { 00:14:21.886 "method": "iobuf_set_options", 00:14:21.886 "params": { 00:14:21.886 "small_pool_count": 8192, 00:14:21.886 "large_pool_count": 1024, 00:14:21.886 "small_bufsize": 8192, 00:14:21.886 "large_bufsize": 135168 00:14:21.886 } 00:14:21.886 } 00:14:21.886 ] 00:14:21.886 }, 00:14:21.886 { 00:14:21.886 "subsystem": "sock", 00:14:21.886 "config": [ 00:14:21.886 { 00:14:21.886 "method": "sock_set_default_impl", 00:14:21.886 "params": { 00:14:21.886 "impl_name": "uring" 00:14:21.886 } 00:14:21.886 }, 00:14:21.886 { 00:14:21.886 "method": "sock_impl_set_options", 00:14:21.886 "params": { 00:14:21.886 "impl_name": "ssl", 00:14:21.886 "recv_buf_size": 4096, 00:14:21.886 "send_buf_size": 4096, 00:14:21.886 "enable_recv_pipe": true, 00:14:21.886 "enable_quickack": false, 00:14:21.886 "enable_placement_id": 0, 00:14:21.886 "enable_zerocopy_send_server": true, 00:14:21.886 "enable_zerocopy_send_client": false, 00:14:21.886 "zerocopy_threshold": 0, 00:14:21.886 "tls_version": 0, 00:14:21.886 "enable_ktls": false 00:14:21.886 } 00:14:21.886 }, 00:14:21.886 { 00:14:21.886 "method": "sock_impl_set_options", 00:14:21.886 "params": { 00:14:21.886 "impl_name": "posix", 00:14:21.886 "recv_buf_size": 2097152, 00:14:21.886 "send_buf_size": 2097152, 00:14:21.886 "enable_recv_pipe": true, 00:14:21.886 "enable_quickack": false, 00:14:21.886 "enable_placement_id": 0, 00:14:21.886 "enable_zerocopy_send_server": true, 00:14:21.886 "enable_zerocopy_send_client": false, 00:14:21.886 "zerocopy_threshold": 0, 00:14:21.886 "tls_version": 0, 00:14:21.886 "enable_ktls": false 00:14:21.886 } 00:14:21.886 }, 00:14:21.886 { 00:14:21.886 "method": "sock_impl_set_options", 00:14:21.886 "params": { 00:14:21.886 "impl_name": "uring", 00:14:21.886 "recv_buf_size": 2097152, 00:14:21.886 "send_buf_size": 2097152, 00:14:21.886 "enable_recv_pipe": true, 00:14:21.886 "enable_quickack": false, 00:14:21.886 "enable_placement_id": 0, 00:14:21.886 
"enable_zerocopy_send_server": false, 00:14:21.886 "enable_zerocopy_send_client": false, 00:14:21.886 "zerocopy_threshold": 0, 00:14:21.886 "tls_version": 0, 00:14:21.886 "enable_ktls": false 00:14:21.886 } 00:14:21.886 } 00:14:21.886 ] 00:14:21.886 }, 00:14:21.886 { 00:14:21.886 "subsystem": "vmd", 00:14:21.886 "config": [] 00:14:21.886 }, 00:14:21.886 { 00:14:21.886 "subsystem": "accel", 00:14:21.886 "config": [ 00:14:21.886 { 00:14:21.886 "method": "accel_set_options", 00:14:21.886 "params": { 00:14:21.886 "small_cache_size": 128, 00:14:21.886 "large_cache_size": 16, 00:14:21.886 "task_count": 2048, 00:14:21.886 "sequence_count": 2048, 00:14:21.886 "buf_count": 2048 00:14:21.886 } 00:14:21.886 } 00:14:21.886 ] 00:14:21.886 }, 00:14:21.886 { 00:14:21.886 "subsystem": "bdev", 00:14:21.886 "config": [ 00:14:21.886 { 00:14:21.886 "method": "bdev_set_options", 00:14:21.886 "params": { 00:14:21.886 "bdev_io_pool_size": 65535, 00:14:21.886 "bdev_io_cache_size": 256, 00:14:21.886 "bdev_auto_examine": true, 00:14:21.886 "iobuf_small_cache_size": 128, 00:14:21.886 "iobuf_large_cache_size": 16 00:14:21.886 } 00:14:21.886 }, 00:14:21.886 { 00:14:21.886 "method": "bdev_raid_set_options", 00:14:21.886 "params": { 00:14:21.886 "process_window_size_kb": 1024, 00:14:21.886 "process_max_bandwidth_mb_sec": 0 00:14:21.886 } 00:14:21.886 }, 00:14:21.886 { 00:14:21.886 "method": "bdev_iscsi_set_options", 00:14:21.886 "params": { 00:14:21.886 "timeout_sec": 30 00:14:21.886 } 00:14:21.886 }, 00:14:21.886 { 00:14:21.886 "method": "bdev_nvme_set_options", 00:14:21.886 "params": { 00:14:21.886 "action_on_timeout": "none", 00:14:21.886 "timeout_us": 0, 00:14:21.886 "timeout_admin_us": 0, 00:14:21.886 "keep_alive_timeout_ms": 10000, 00:14:21.886 "arbitration_burst": 0, 00:14:21.886 "low_priority_weight": 0, 00:14:21.886 "medium_priority_weight": 0, 00:14:21.886 "high_priority_weight": 0, 00:14:21.886 "nvme_adminq_poll_period_us": 10000, 00:14:21.886 "nvme_ioq_poll_period_us": 0, 00:14:21.886 "io_queue_requests": 512, 00:14:21.886 "delay_cmd_submit": true, 00:14:21.886 "transport_retry_count": 4, 00:14:21.886 "bdev_retry_count": 3, 00:14:21.886 "transport_ack_timeout": 0, 00:14:21.886 "ctrlr_loss_timeout_sec": 0, 00:14:21.886 "reconnect_delay_sec": 0, 00:14:21.886 "fast_io_fail_timeout_sec": 0, 00:14:21.886 "disable_auto_failback": false, 00:14:21.886 "generate_uuids": false, 00:14:21.886 "transport_tos": 0, 00:14:21.886 "nvme_error_stat": false, 00:14:21.886 "rdma_srq_size": 0, 00:14:21.886 "io_path_stat": false, 00:14:21.886 "allow_accel_sequence": false, 00:14:21.886 "rdma_max_cq_size": 0, 00:14:21.886 "rdma_cm_event_timeout_ms": 0, 00:14:21.886 "dhchap_digests": [ 00:14:21.886 "sha256", 00:14:21.886 "sha384", 00:14:21.886 "sha512" 00:14:21.886 ], 00:14:21.886 "dhchap_dhgroups": [ 00:14:21.886 "null", 00:14:21.886 "ffdhe2048", 00:14:21.886 "ffdhe3072", 00:14:21.886 "ffdhe4096", 00:14:21.886 "ffdhe6144", 00:14:21.886 "ffdhe8192" 00:14:21.886 ] 00:14:21.886 } 00:14:21.886 }, 00:14:21.886 { 00:14:21.886 "method": "bdev_nvme_attach_controller", 00:14:21.886 "params": { 00:14:21.886 "name": "TLSTEST", 00:14:21.886 "trtype": "TCP", 00:14:21.886 "adrfam": "IPv4", 00:14:21.886 "traddr": "10.0.0.2", 00:14:21.886 "trsvcid": "4420", 00:14:21.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.886 "prchk_reftag": false, 00:14:21.886 "prchk_guard": false, 00:14:21.886 "ctrlr_loss_timeout_sec": 0, 00:14:21.886 "reconnect_delay_sec": 0, 00:14:21.886 "fast_io_fail_timeout_sec": 0, 00:14:21.886 "psk": "/tmp/tmp.BEv6v6GMa2", 
00:14:21.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:21.886 "hdgst": false, 00:14:21.886 "ddgst": false 00:14:21.886 } 00:14:21.886 }, 00:14:21.886 { 00:14:21.886 "method": "bdev_nvme_set_hotplug", 00:14:21.886 "params": { 00:14:21.886 "period_us": 100000, 00:14:21.886 "enable": false 00:14:21.886 } 00:14:21.886 }, 00:14:21.886 { 00:14:21.886 "method": "bdev_wait_for_examine" 00:14:21.887 } 00:14:21.887 ] 00:14:21.887 }, 00:14:21.887 { 00:14:21.887 "subsystem": "nbd", 00:14:21.887 "config": [] 00:14:21.887 } 00:14:21.887 ] 00:14:21.887 }' 00:14:21.887 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.887 [2024-07-25 13:57:10.746287] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:14:21.887 [2024-07-25 13:57:10.746436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73023 ] 00:14:21.887 [2024-07-25 13:57:10.902290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.145 [2024-07-25 13:57:11.049271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.403 [2024-07-25 13:57:11.185207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:22.403 [2024-07-25 13:57:11.225586] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:22.403 [2024-07-25 13:57:11.225715] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:22.969 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:22.969 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:22.969 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:23.228 Running I/O for 10 seconds... 
00:14:33.211 00:14:33.211 Latency(us) 00:14:33.211 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.211 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:33.211 Verification LBA range: start 0x0 length 0x2000 00:14:33.211 TLSTESTn1 : 10.02 4046.56 15.81 0.00 0.00 31569.74 6374.87 30027.40 00:14:33.212 =================================================================================================================== 00:14:33.212 Total : 4046.56 15.81 0.00 0.00 31569.74 6374.87 30027.40 00:14:33.212 0 00:14:33.212 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:33.212 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 73023 00:14:33.212 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73023 ']' 00:14:33.212 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73023 00:14:33.212 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:33.212 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:33.212 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73023 00:14:33.212 killing process with pid 73023 00:14:33.212 Received shutdown signal, test time was about 10.000000 seconds 00:14:33.212 00:14:33.212 Latency(us) 00:14:33.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.212 =================================================================================================================== 00:14:33.212 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:33.212 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:33.212 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:33.212 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73023' 00:14:33.212 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73023 00:14:33.212 [2024-07-25 13:57:22.102357] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:33.212 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73023 00:14:33.471 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 72991 00:14:33.471 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72991 ']' 00:14:33.471 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72991 00:14:33.471 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:33.471 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:33.471 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72991 00:14:33.471 killing process with pid 72991 00:14:33.471 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:33.471 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:33.471 13:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72991' 00:14:33.471 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72991 00:14:33.471 [2024-07-25 13:57:22.359854] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:33.471 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72991 00:14:33.729 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:33.729 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:33.729 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:33.729 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:33.729 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73163 00:14:33.729 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:33.729 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73163 00:14:33.729 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73163 ']' 00:14:33.729 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.729 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:33.729 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.729 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:33.729 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:33.729 [2024-07-25 13:57:22.666050] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:14:33.729 [2024-07-25 13:57:22.667122] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.988 [2024-07-25 13:57:22.807770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.988 [2024-07-25 13:57:22.956700] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.988 [2024-07-25 13:57:22.956772] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.988 [2024-07-25 13:57:22.956784] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.988 [2024-07-25 13:57:22.956793] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.988 [2024-07-25 13:57:22.956800] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
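The first bdevperf run above (pid 73023) receives its entire configuration as inline JSON on /dev/fd/63, and the TLS PSK is passed as a raw file path inside the bdev_nvme_attach_controller parameters, which is what triggers the "spdk_nvme_ctrlr_opts.psk ... deprecated" warning in the trace. The following is a condensed sketch of that pattern, keeping only the controller-attach portion of the config (the full config above also selects the uring socket implementation and tunes iobuf/bdev/nvme options); paths and addresses are the ones from the log, and the PSK file itself is generated earlier in the script and not shown here.

# Sketch of the deprecated PSK-by-path flow: the key path is embedded in the bdev config.
bdevperf_config() {
  cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "TLSTEST", "trtype": "TCP", "adrfam": "IPv4",
            "traddr": "10.0.0.2", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "psk": "/tmp/tmp.BEv6v6GMa2"
          }
        }
      ]
    }
  ]
}
JSON
}

# -z makes bdevperf idle and wait for RPC commands; the config arrives via
# process substitution, which is what shows up as /dev/fd/63 in the xtrace above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
    -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
    -c <(bdevperf_config) &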
00:14:33.988 [2024-07-25 13:57:22.956843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.246 [2024-07-25 13:57:23.030750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:34.813 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:34.813 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:34.813 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:34.813 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:34.813 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:34.813 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.813 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.BEv6v6GMa2 00:14:34.813 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.BEv6v6GMa2 00:14:34.813 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:35.071 [2024-07-25 13:57:24.015257] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.071 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:35.330 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:35.588 [2024-07-25 13:57:24.523420] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:35.588 [2024-07-25 13:57:24.523734] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.588 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:35.846 malloc0 00:14:35.846 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:36.105 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BEv6v6GMa2 00:14:36.364 [2024-07-25 13:57:25.378666] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:36.623 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=73218 00:14:36.623 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:36.623 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 73218 /var/tmp/bdevperf.sock 00:14:36.623 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73218 ']' 00:14:36.623 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:36.623 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:14:36.623 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:36.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:36.623 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:36.623 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:36.623 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:36.623 [2024-07-25 13:57:25.459140] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:14:36.623 [2024-07-25 13:57:25.459257] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73218 ] 00:14:36.623 [2024-07-25 13:57:25.598857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.882 [2024-07-25 13:57:25.728531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.882 [2024-07-25 13:57:25.784915] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:37.449 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:37.449 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:37.449 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BEv6v6GMa2 00:14:37.707 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:37.966 [2024-07-25 13:57:26.964068] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:38.225 nvme0n1 00:14:38.225 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:38.225 Running I/O for 1 seconds... 
00:14:39.603 00:14:39.603 Latency(us) 00:14:39.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.603 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:39.603 Verification LBA range: start 0x0 length 0x2000 00:14:39.603 nvme0n1 : 1.02 3808.78 14.88 0.00 0.00 33244.55 7387.69 24188.74 00:14:39.603 =================================================================================================================== 00:14:39.603 Total : 3808.78 14.88 0.00 0.00 33244.55 7387.69 24188.74 00:14:39.603 0 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 73218 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73218 ']' 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73218 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73218 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:39.603 killing process with pid 73218 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73218' 00:14:39.603 Received shutdown signal, test time was about 1.000000 seconds 00:14:39.603 00:14:39.603 Latency(us) 00:14:39.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.603 =================================================================================================================== 00:14:39.603 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73218 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73218 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 73163 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73163 ']' 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73163 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73163 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:39.603 killing process with pid 73163 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73163' 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73163 00:14:39.603 [2024-07-25 13:57:28.491344] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: 
deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:39.603 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73163 00:14:39.862 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:14:39.862 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:39.862 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:39.862 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.862 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73269 00:14:39.862 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:39.862 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73269 00:14:39.862 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73269 ']' 00:14:39.862 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.862 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:39.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.862 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.862 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:39.862 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.862 [2024-07-25 13:57:28.887778] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:14:39.862 [2024-07-25 13:57:28.887887] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.172 [2024-07-25 13:57:29.019425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.172 [2024-07-25 13:57:29.174602] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.172 [2024-07-25 13:57:29.174675] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.172 [2024-07-25 13:57:29.174687] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.172 [2024-07-25 13:57:29.174696] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.172 [2024-07-25 13:57:29.174704] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
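The second pass above (target pid 73163, bdevperf pid 73218) runs the same I/O test, but the initiator now goes through the keyring: the PSK file is registered as a named key and bdev_nvme_attach_controller references it by name, so only the target-side nvmf_subsystem_add_host --psk path still raises the "PSK path" deprecation warning. The RPC sequence, condensed from the trace above (RPC abbreviates the rpc.py path used throughout):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
KEY=/tmp/tmp.BEv6v6GMa2

# Target side: TCP transport, a subsystem with a malloc namespace, a TLS-enabled
# listener (-k), and a per-host PSK still given as a file path.
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $KEY

# Initiator side, against the bdevperf RPC socket: register the PSK in the
# keyring, then attach referencing the key by name instead of by path.
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 $KEY
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1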
00:14:40.172 [2024-07-25 13:57:29.174747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.432 [2024-07-25 13:57:29.255168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:40.998 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:40.998 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:40.998 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:40.998 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:40.998 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.998 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.998 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:14:40.998 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.998 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.998 [2024-07-25 13:57:29.980368] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.998 malloc0 00:14:40.998 [2024-07-25 13:57:30.016226] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:40.998 [2024-07-25 13:57:30.016545] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.257 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.257 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=73301 00:14:41.257 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:41.257 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 73301 /var/tmp/bdevperf.sock 00:14:41.257 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73301 ']' 00:14:41.257 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:41.257 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:41.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:41.257 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:41.257 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:41.257 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.257 [2024-07-25 13:57:30.100835] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:14:41.257 [2024-07-25 13:57:30.100936] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73301 ] 00:14:41.257 [2024-07-25 13:57:30.243168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.515 [2024-07-25 13:57:30.373483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.515 [2024-07-25 13:57:30.431009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:42.081 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:42.081 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:42.081 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BEv6v6GMa2 00:14:42.338 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:42.596 [2024-07-25 13:57:31.563170] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:42.854 nvme0n1 00:14:42.854 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:42.854 Running I/O for 1 seconds... 00:14:43.788 00:14:43.788 Latency(us) 00:14:43.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.788 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:43.788 Verification LBA range: start 0x0 length 0x2000 00:14:43.788 nvme0n1 : 1.03 3617.69 14.13 0.00 0.00 34948.69 11736.90 25976.09 00:14:43.788 =================================================================================================================== 00:14:43.788 Total : 3617.69 14.13 0.00 0.00 34948.69 11736.90 25976.09 00:14:43.788 0 00:14:43.788 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:14:43.788 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.788 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.047 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.047 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:14:44.047 "subsystems": [ 00:14:44.047 { 00:14:44.047 "subsystem": "keyring", 00:14:44.047 "config": [ 00:14:44.047 { 00:14:44.047 "method": "keyring_file_add_key", 00:14:44.047 "params": { 00:14:44.047 "name": "key0", 00:14:44.047 "path": "/tmp/tmp.BEv6v6GMa2" 00:14:44.047 } 00:14:44.047 } 00:14:44.047 ] 00:14:44.047 }, 00:14:44.047 { 00:14:44.047 "subsystem": "iobuf", 00:14:44.047 "config": [ 00:14:44.047 { 00:14:44.047 "method": "iobuf_set_options", 00:14:44.047 "params": { 00:14:44.047 "small_pool_count": 8192, 00:14:44.047 "large_pool_count": 1024, 00:14:44.047 "small_bufsize": 8192, 00:14:44.047 "large_bufsize": 135168 00:14:44.047 } 00:14:44.047 } 00:14:44.047 ] 00:14:44.047 }, 00:14:44.047 { 
00:14:44.047 "subsystem": "sock", 00:14:44.047 "config": [ 00:14:44.047 { 00:14:44.047 "method": "sock_set_default_impl", 00:14:44.047 "params": { 00:14:44.047 "impl_name": "uring" 00:14:44.047 } 00:14:44.047 }, 00:14:44.047 { 00:14:44.047 "method": "sock_impl_set_options", 00:14:44.047 "params": { 00:14:44.047 "impl_name": "ssl", 00:14:44.047 "recv_buf_size": 4096, 00:14:44.047 "send_buf_size": 4096, 00:14:44.047 "enable_recv_pipe": true, 00:14:44.047 "enable_quickack": false, 00:14:44.047 "enable_placement_id": 0, 00:14:44.047 "enable_zerocopy_send_server": true, 00:14:44.047 "enable_zerocopy_send_client": false, 00:14:44.047 "zerocopy_threshold": 0, 00:14:44.047 "tls_version": 0, 00:14:44.047 "enable_ktls": false 00:14:44.047 } 00:14:44.047 }, 00:14:44.047 { 00:14:44.047 "method": "sock_impl_set_options", 00:14:44.047 "params": { 00:14:44.047 "impl_name": "posix", 00:14:44.047 "recv_buf_size": 2097152, 00:14:44.047 "send_buf_size": 2097152, 00:14:44.047 "enable_recv_pipe": true, 00:14:44.047 "enable_quickack": false, 00:14:44.047 "enable_placement_id": 0, 00:14:44.047 "enable_zerocopy_send_server": true, 00:14:44.047 "enable_zerocopy_send_client": false, 00:14:44.047 "zerocopy_threshold": 0, 00:14:44.047 "tls_version": 0, 00:14:44.047 "enable_ktls": false 00:14:44.047 } 00:14:44.047 }, 00:14:44.047 { 00:14:44.047 "method": "sock_impl_set_options", 00:14:44.047 "params": { 00:14:44.047 "impl_name": "uring", 00:14:44.047 "recv_buf_size": 2097152, 00:14:44.047 "send_buf_size": 2097152, 00:14:44.047 "enable_recv_pipe": true, 00:14:44.047 "enable_quickack": false, 00:14:44.047 "enable_placement_id": 0, 00:14:44.047 "enable_zerocopy_send_server": false, 00:14:44.047 "enable_zerocopy_send_client": false, 00:14:44.047 "zerocopy_threshold": 0, 00:14:44.047 "tls_version": 0, 00:14:44.047 "enable_ktls": false 00:14:44.047 } 00:14:44.047 } 00:14:44.047 ] 00:14:44.047 }, 00:14:44.047 { 00:14:44.047 "subsystem": "vmd", 00:14:44.047 "config": [] 00:14:44.047 }, 00:14:44.047 { 00:14:44.047 "subsystem": "accel", 00:14:44.047 "config": [ 00:14:44.047 { 00:14:44.047 "method": "accel_set_options", 00:14:44.047 "params": { 00:14:44.047 "small_cache_size": 128, 00:14:44.047 "large_cache_size": 16, 00:14:44.047 "task_count": 2048, 00:14:44.047 "sequence_count": 2048, 00:14:44.047 "buf_count": 2048 00:14:44.047 } 00:14:44.047 } 00:14:44.047 ] 00:14:44.047 }, 00:14:44.047 { 00:14:44.047 "subsystem": "bdev", 00:14:44.047 "config": [ 00:14:44.047 { 00:14:44.047 "method": "bdev_set_options", 00:14:44.047 "params": { 00:14:44.047 "bdev_io_pool_size": 65535, 00:14:44.047 "bdev_io_cache_size": 256, 00:14:44.047 "bdev_auto_examine": true, 00:14:44.047 "iobuf_small_cache_size": 128, 00:14:44.047 "iobuf_large_cache_size": 16 00:14:44.047 } 00:14:44.047 }, 00:14:44.047 { 00:14:44.047 "method": "bdev_raid_set_options", 00:14:44.047 "params": { 00:14:44.047 "process_window_size_kb": 1024, 00:14:44.048 "process_max_bandwidth_mb_sec": 0 00:14:44.048 } 00:14:44.048 }, 00:14:44.048 { 00:14:44.048 "method": "bdev_iscsi_set_options", 00:14:44.048 "params": { 00:14:44.048 "timeout_sec": 30 00:14:44.048 } 00:14:44.048 }, 00:14:44.048 { 00:14:44.048 "method": "bdev_nvme_set_options", 00:14:44.048 "params": { 00:14:44.048 "action_on_timeout": "none", 00:14:44.048 "timeout_us": 0, 00:14:44.048 "timeout_admin_us": 0, 00:14:44.048 "keep_alive_timeout_ms": 10000, 00:14:44.048 "arbitration_burst": 0, 00:14:44.048 "low_priority_weight": 0, 00:14:44.048 "medium_priority_weight": 0, 00:14:44.048 "high_priority_weight": 0, 00:14:44.048 
"nvme_adminq_poll_period_us": 10000, 00:14:44.048 "nvme_ioq_poll_period_us": 0, 00:14:44.048 "io_queue_requests": 0, 00:14:44.048 "delay_cmd_submit": true, 00:14:44.048 "transport_retry_count": 4, 00:14:44.048 "bdev_retry_count": 3, 00:14:44.048 "transport_ack_timeout": 0, 00:14:44.048 "ctrlr_loss_timeout_sec": 0, 00:14:44.048 "reconnect_delay_sec": 0, 00:14:44.048 "fast_io_fail_timeout_sec": 0, 00:14:44.048 "disable_auto_failback": false, 00:14:44.048 "generate_uuids": false, 00:14:44.048 "transport_tos": 0, 00:14:44.048 "nvme_error_stat": false, 00:14:44.048 "rdma_srq_size": 0, 00:14:44.048 "io_path_stat": false, 00:14:44.048 "allow_accel_sequence": false, 00:14:44.048 "rdma_max_cq_size": 0, 00:14:44.048 "rdma_cm_event_timeout_ms": 0, 00:14:44.048 "dhchap_digests": [ 00:14:44.048 "sha256", 00:14:44.048 "sha384", 00:14:44.048 "sha512" 00:14:44.048 ], 00:14:44.048 "dhchap_dhgroups": [ 00:14:44.048 "null", 00:14:44.048 "ffdhe2048", 00:14:44.048 "ffdhe3072", 00:14:44.048 "ffdhe4096", 00:14:44.048 "ffdhe6144", 00:14:44.048 "ffdhe8192" 00:14:44.048 ] 00:14:44.048 } 00:14:44.048 }, 00:14:44.048 { 00:14:44.048 "method": "bdev_nvme_set_hotplug", 00:14:44.048 "params": { 00:14:44.048 "period_us": 100000, 00:14:44.048 "enable": false 00:14:44.048 } 00:14:44.048 }, 00:14:44.048 { 00:14:44.048 "method": "bdev_malloc_create", 00:14:44.048 "params": { 00:14:44.048 "name": "malloc0", 00:14:44.048 "num_blocks": 8192, 00:14:44.048 "block_size": 4096, 00:14:44.048 "physical_block_size": 4096, 00:14:44.048 "uuid": "42435374-e0f6-4e27-b474-f31c1cf27ebb", 00:14:44.048 "optimal_io_boundary": 0, 00:14:44.048 "md_size": 0, 00:14:44.048 "dif_type": 0, 00:14:44.048 "dif_is_head_of_md": false, 00:14:44.048 "dif_pi_format": 0 00:14:44.048 } 00:14:44.048 }, 00:14:44.048 { 00:14:44.048 "method": "bdev_wait_for_examine" 00:14:44.048 } 00:14:44.048 ] 00:14:44.048 }, 00:14:44.048 { 00:14:44.048 "subsystem": "nbd", 00:14:44.048 "config": [] 00:14:44.048 }, 00:14:44.048 { 00:14:44.048 "subsystem": "scheduler", 00:14:44.048 "config": [ 00:14:44.048 { 00:14:44.048 "method": "framework_set_scheduler", 00:14:44.048 "params": { 00:14:44.048 "name": "static" 00:14:44.048 } 00:14:44.048 } 00:14:44.048 ] 00:14:44.048 }, 00:14:44.048 { 00:14:44.048 "subsystem": "nvmf", 00:14:44.048 "config": [ 00:14:44.048 { 00:14:44.048 "method": "nvmf_set_config", 00:14:44.048 "params": { 00:14:44.048 "discovery_filter": "match_any", 00:14:44.048 "admin_cmd_passthru": { 00:14:44.048 "identify_ctrlr": false 00:14:44.048 } 00:14:44.048 } 00:14:44.048 }, 00:14:44.048 { 00:14:44.048 "method": "nvmf_set_max_subsystems", 00:14:44.048 "params": { 00:14:44.048 "max_subsystems": 1024 00:14:44.048 } 00:14:44.048 }, 00:14:44.048 { 00:14:44.048 "method": "nvmf_set_crdt", 00:14:44.048 "params": { 00:14:44.048 "crdt1": 0, 00:14:44.048 "crdt2": 0, 00:14:44.048 "crdt3": 0 00:14:44.048 } 00:14:44.048 }, 00:14:44.048 { 00:14:44.048 "method": "nvmf_create_transport", 00:14:44.048 "params": { 00:14:44.048 "trtype": "TCP", 00:14:44.048 "max_queue_depth": 128, 00:14:44.048 "max_io_qpairs_per_ctrlr": 127, 00:14:44.048 "in_capsule_data_size": 4096, 00:14:44.048 "max_io_size": 131072, 00:14:44.048 "io_unit_size": 131072, 00:14:44.048 "max_aq_depth": 128, 00:14:44.048 "num_shared_buffers": 511, 00:14:44.048 "buf_cache_size": 4294967295, 00:14:44.048 "dif_insert_or_strip": false, 00:14:44.048 "zcopy": false, 00:14:44.048 "c2h_success": false, 00:14:44.048 "sock_priority": 0, 00:14:44.048 "abort_timeout_sec": 1, 00:14:44.048 "ack_timeout": 0, 00:14:44.048 
"data_wr_pool_size": 0 00:14:44.048 } 00:14:44.048 }, 00:14:44.048 { 00:14:44.048 "method": "nvmf_create_subsystem", 00:14:44.048 "params": { 00:14:44.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.048 "allow_any_host": false, 00:14:44.048 "serial_number": "00000000000000000000", 00:14:44.048 "model_number": "SPDK bdev Controller", 00:14:44.048 "max_namespaces": 32, 00:14:44.048 "min_cntlid": 1, 00:14:44.048 "max_cntlid": 65519, 00:14:44.048 "ana_reporting": false 00:14:44.048 } 00:14:44.048 }, 00:14:44.048 { 00:14:44.048 "method": "nvmf_subsystem_add_host", 00:14:44.048 "params": { 00:14:44.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.048 "host": "nqn.2016-06.io.spdk:host1", 00:14:44.048 "psk": "key0" 00:14:44.048 } 00:14:44.048 }, 00:14:44.048 { 00:14:44.048 "method": "nvmf_subsystem_add_ns", 00:14:44.048 "params": { 00:14:44.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.048 "namespace": { 00:14:44.048 "nsid": 1, 00:14:44.048 "bdev_name": "malloc0", 00:14:44.048 "nguid": "42435374E0F64E27B474F31C1CF27EBB", 00:14:44.048 "uuid": "42435374-e0f6-4e27-b474-f31c1cf27ebb", 00:14:44.048 "no_auto_visible": false 00:14:44.048 } 00:14:44.048 } 00:14:44.048 }, 00:14:44.048 { 00:14:44.048 "method": "nvmf_subsystem_add_listener", 00:14:44.048 "params": { 00:14:44.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.048 "listen_address": { 00:14:44.048 "trtype": "TCP", 00:14:44.048 "adrfam": "IPv4", 00:14:44.048 "traddr": "10.0.0.2", 00:14:44.048 "trsvcid": "4420" 00:14:44.048 }, 00:14:44.048 "secure_channel": false, 00:14:44.048 "sock_impl": "ssl" 00:14:44.048 } 00:14:44.048 } 00:14:44.048 ] 00:14:44.048 } 00:14:44.048 ] 00:14:44.048 }' 00:14:44.048 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:44.306 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:14:44.307 "subsystems": [ 00:14:44.307 { 00:14:44.307 "subsystem": "keyring", 00:14:44.307 "config": [ 00:14:44.307 { 00:14:44.307 "method": "keyring_file_add_key", 00:14:44.307 "params": { 00:14:44.307 "name": "key0", 00:14:44.307 "path": "/tmp/tmp.BEv6v6GMa2" 00:14:44.307 } 00:14:44.307 } 00:14:44.307 ] 00:14:44.307 }, 00:14:44.307 { 00:14:44.307 "subsystem": "iobuf", 00:14:44.307 "config": [ 00:14:44.307 { 00:14:44.307 "method": "iobuf_set_options", 00:14:44.307 "params": { 00:14:44.307 "small_pool_count": 8192, 00:14:44.307 "large_pool_count": 1024, 00:14:44.307 "small_bufsize": 8192, 00:14:44.307 "large_bufsize": 135168 00:14:44.307 } 00:14:44.307 } 00:14:44.307 ] 00:14:44.307 }, 00:14:44.307 { 00:14:44.307 "subsystem": "sock", 00:14:44.307 "config": [ 00:14:44.307 { 00:14:44.307 "method": "sock_set_default_impl", 00:14:44.307 "params": { 00:14:44.307 "impl_name": "uring" 00:14:44.307 } 00:14:44.307 }, 00:14:44.307 { 00:14:44.307 "method": "sock_impl_set_options", 00:14:44.307 "params": { 00:14:44.307 "impl_name": "ssl", 00:14:44.307 "recv_buf_size": 4096, 00:14:44.307 "send_buf_size": 4096, 00:14:44.307 "enable_recv_pipe": true, 00:14:44.307 "enable_quickack": false, 00:14:44.307 "enable_placement_id": 0, 00:14:44.307 "enable_zerocopy_send_server": true, 00:14:44.307 "enable_zerocopy_send_client": false, 00:14:44.307 "zerocopy_threshold": 0, 00:14:44.307 "tls_version": 0, 00:14:44.307 "enable_ktls": false 00:14:44.307 } 00:14:44.307 }, 00:14:44.307 { 00:14:44.307 "method": "sock_impl_set_options", 00:14:44.307 "params": { 00:14:44.307 "impl_name": "posix", 00:14:44.307 "recv_buf_size": 2097152, 
00:14:44.307 "send_buf_size": 2097152, 00:14:44.307 "enable_recv_pipe": true, 00:14:44.307 "enable_quickack": false, 00:14:44.307 "enable_placement_id": 0, 00:14:44.307 "enable_zerocopy_send_server": true, 00:14:44.307 "enable_zerocopy_send_client": false, 00:14:44.307 "zerocopy_threshold": 0, 00:14:44.307 "tls_version": 0, 00:14:44.307 "enable_ktls": false 00:14:44.307 } 00:14:44.307 }, 00:14:44.307 { 00:14:44.307 "method": "sock_impl_set_options", 00:14:44.307 "params": { 00:14:44.307 "impl_name": "uring", 00:14:44.307 "recv_buf_size": 2097152, 00:14:44.307 "send_buf_size": 2097152, 00:14:44.307 "enable_recv_pipe": true, 00:14:44.307 "enable_quickack": false, 00:14:44.307 "enable_placement_id": 0, 00:14:44.307 "enable_zerocopy_send_server": false, 00:14:44.307 "enable_zerocopy_send_client": false, 00:14:44.307 "zerocopy_threshold": 0, 00:14:44.307 "tls_version": 0, 00:14:44.307 "enable_ktls": false 00:14:44.307 } 00:14:44.307 } 00:14:44.307 ] 00:14:44.307 }, 00:14:44.307 { 00:14:44.307 "subsystem": "vmd", 00:14:44.307 "config": [] 00:14:44.307 }, 00:14:44.307 { 00:14:44.307 "subsystem": "accel", 00:14:44.307 "config": [ 00:14:44.307 { 00:14:44.307 "method": "accel_set_options", 00:14:44.307 "params": { 00:14:44.307 "small_cache_size": 128, 00:14:44.307 "large_cache_size": 16, 00:14:44.307 "task_count": 2048, 00:14:44.307 "sequence_count": 2048, 00:14:44.307 "buf_count": 2048 00:14:44.307 } 00:14:44.307 } 00:14:44.307 ] 00:14:44.307 }, 00:14:44.307 { 00:14:44.307 "subsystem": "bdev", 00:14:44.307 "config": [ 00:14:44.307 { 00:14:44.307 "method": "bdev_set_options", 00:14:44.307 "params": { 00:14:44.307 "bdev_io_pool_size": 65535, 00:14:44.307 "bdev_io_cache_size": 256, 00:14:44.307 "bdev_auto_examine": true, 00:14:44.307 "iobuf_small_cache_size": 128, 00:14:44.307 "iobuf_large_cache_size": 16 00:14:44.307 } 00:14:44.307 }, 00:14:44.307 { 00:14:44.307 "method": "bdev_raid_set_options", 00:14:44.307 "params": { 00:14:44.307 "process_window_size_kb": 1024, 00:14:44.307 "process_max_bandwidth_mb_sec": 0 00:14:44.307 } 00:14:44.307 }, 00:14:44.307 { 00:14:44.307 "method": "bdev_iscsi_set_options", 00:14:44.307 "params": { 00:14:44.307 "timeout_sec": 30 00:14:44.307 } 00:14:44.307 }, 00:14:44.307 { 00:14:44.307 "method": "bdev_nvme_set_options", 00:14:44.307 "params": { 00:14:44.307 "action_on_timeout": "none", 00:14:44.307 "timeout_us": 0, 00:14:44.307 "timeout_admin_us": 0, 00:14:44.307 "keep_alive_timeout_ms": 10000, 00:14:44.307 "arbitration_burst": 0, 00:14:44.307 "low_priority_weight": 0, 00:14:44.307 "medium_priority_weight": 0, 00:14:44.307 "high_priority_weight": 0, 00:14:44.307 "nvme_adminq_poll_period_us": 10000, 00:14:44.307 "nvme_ioq_poll_period_us": 0, 00:14:44.307 "io_queue_requests": 512, 00:14:44.307 "delay_cmd_submit": true, 00:14:44.307 "transport_retry_count": 4, 00:14:44.307 "bdev_retry_count": 3, 00:14:44.307 "transport_ack_timeout": 0, 00:14:44.307 "ctrlr_loss_timeout_sec": 0, 00:14:44.307 "reconnect_delay_sec": 0, 00:14:44.307 "fast_io_fail_timeout_sec": 0, 00:14:44.307 "disable_auto_failback": false, 00:14:44.307 "generate_uuids": false, 00:14:44.307 "transport_tos": 0, 00:14:44.307 "nvme_error_stat": false, 00:14:44.307 "rdma_srq_size": 0, 00:14:44.307 "io_path_stat": false, 00:14:44.307 "allow_accel_sequence": false, 00:14:44.307 "rdma_max_cq_size": 0, 00:14:44.307 "rdma_cm_event_timeout_ms": 0, 00:14:44.307 "dhchap_digests": [ 00:14:44.307 "sha256", 00:14:44.307 "sha384", 00:14:44.307 "sha512" 00:14:44.307 ], 00:14:44.307 "dhchap_dhgroups": [ 00:14:44.307 "null", 
00:14:44.307 "ffdhe2048", 00:14:44.307 "ffdhe3072", 00:14:44.307 "ffdhe4096", 00:14:44.307 "ffdhe6144", 00:14:44.307 "ffdhe8192" 00:14:44.307 ] 00:14:44.307 } 00:14:44.307 }, 00:14:44.307 { 00:14:44.307 "method": "bdev_nvme_attach_controller", 00:14:44.307 "params": { 00:14:44.307 "name": "nvme0", 00:14:44.307 "trtype": "TCP", 00:14:44.307 "adrfam": "IPv4", 00:14:44.307 "traddr": "10.0.0.2", 00:14:44.307 "trsvcid": "4420", 00:14:44.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.307 "prchk_reftag": false, 00:14:44.307 "prchk_guard": false, 00:14:44.307 "ctrlr_loss_timeout_sec": 0, 00:14:44.307 "reconnect_delay_sec": 0, 00:14:44.307 "fast_io_fail_timeout_sec": 0, 00:14:44.307 "psk": "key0", 00:14:44.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:44.307 "hdgst": false, 00:14:44.307 "ddgst": false 00:14:44.307 } 00:14:44.307 }, 00:14:44.307 { 00:14:44.307 "method": "bdev_nvme_set_hotplug", 00:14:44.307 "params": { 00:14:44.307 "period_us": 100000, 00:14:44.307 "enable": false 00:14:44.307 } 00:14:44.307 }, 00:14:44.307 { 00:14:44.307 "method": "bdev_enable_histogram", 00:14:44.307 "params": { 00:14:44.307 "name": "nvme0n1", 00:14:44.307 "enable": true 00:14:44.307 } 00:14:44.307 }, 00:14:44.307 { 00:14:44.307 "method": "bdev_wait_for_examine" 00:14:44.307 } 00:14:44.307 ] 00:14:44.307 }, 00:14:44.307 { 00:14:44.307 "subsystem": "nbd", 00:14:44.307 "config": [] 00:14:44.307 } 00:14:44.307 ] 00:14:44.307 }' 00:14:44.307 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 73301 00:14:44.307 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73301 ']' 00:14:44.307 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73301 00:14:44.307 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:44.307 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:44.307 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73301 00:14:44.307 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:44.307 killing process with pid 73301 00:14:44.308 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:44.308 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73301' 00:14:44.308 Received shutdown signal, test time was about 1.000000 seconds 00:14:44.308 00:14:44.308 Latency(us) 00:14:44.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.308 =================================================================================================================== 00:14:44.308 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:44.308 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73301 00:14:44.308 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73301 00:14:44.565 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 73269 00:14:44.565 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73269 ']' 00:14:44.565 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73269 00:14:44.565 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
00:14:44.565 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:44.565 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73269 00:14:44.565 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:44.565 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:44.565 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73269' 00:14:44.565 killing process with pid 73269 00:14:44.565 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73269 00:14:44.565 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73269 00:14:45.131 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:14:45.131 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:45.131 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:14:45.131 "subsystems": [ 00:14:45.131 { 00:14:45.131 "subsystem": "keyring", 00:14:45.131 "config": [ 00:14:45.131 { 00:14:45.131 "method": "keyring_file_add_key", 00:14:45.131 "params": { 00:14:45.131 "name": "key0", 00:14:45.131 "path": "/tmp/tmp.BEv6v6GMa2" 00:14:45.131 } 00:14:45.131 } 00:14:45.131 ] 00:14:45.131 }, 00:14:45.131 { 00:14:45.131 "subsystem": "iobuf", 00:14:45.131 "config": [ 00:14:45.131 { 00:14:45.131 "method": "iobuf_set_options", 00:14:45.131 "params": { 00:14:45.131 "small_pool_count": 8192, 00:14:45.131 "large_pool_count": 1024, 00:14:45.131 "small_bufsize": 8192, 00:14:45.131 "large_bufsize": 135168 00:14:45.131 } 00:14:45.131 } 00:14:45.131 ] 00:14:45.131 }, 00:14:45.131 { 00:14:45.131 "subsystem": "sock", 00:14:45.131 "config": [ 00:14:45.131 { 00:14:45.131 "method": "sock_set_default_impl", 00:14:45.131 "params": { 00:14:45.131 "impl_name": "uring" 00:14:45.131 } 00:14:45.131 }, 00:14:45.131 { 00:14:45.131 "method": "sock_impl_set_options", 00:14:45.131 "params": { 00:14:45.131 "impl_name": "ssl", 00:14:45.131 "recv_buf_size": 4096, 00:14:45.131 "send_buf_size": 4096, 00:14:45.131 "enable_recv_pipe": true, 00:14:45.131 "enable_quickack": false, 00:14:45.131 "enable_placement_id": 0, 00:14:45.131 "enable_zerocopy_send_server": true, 00:14:45.131 "enable_zerocopy_send_client": false, 00:14:45.131 "zerocopy_threshold": 0, 00:14:45.131 "tls_version": 0, 00:14:45.131 "enable_ktls": false 00:14:45.131 } 00:14:45.131 }, 00:14:45.131 { 00:14:45.131 "method": "sock_impl_set_options", 00:14:45.132 "params": { 00:14:45.132 "impl_name": "posix", 00:14:45.132 "recv_buf_size": 2097152, 00:14:45.132 "send_buf_size": 2097152, 00:14:45.132 "enable_recv_pipe": true, 00:14:45.132 "enable_quickack": false, 00:14:45.132 "enable_placement_id": 0, 00:14:45.132 "enable_zerocopy_send_server": true, 00:14:45.132 "enable_zerocopy_send_client": false, 00:14:45.132 "zerocopy_threshold": 0, 00:14:45.132 "tls_version": 0, 00:14:45.132 "enable_ktls": false 00:14:45.132 } 00:14:45.132 }, 00:14:45.132 { 00:14:45.132 "method": "sock_impl_set_options", 00:14:45.132 "params": { 00:14:45.132 "impl_name": "uring", 00:14:45.132 "recv_buf_size": 2097152, 00:14:45.132 "send_buf_size": 2097152, 00:14:45.132 "enable_recv_pipe": true, 00:14:45.132 "enable_quickack": false, 00:14:45.132 "enable_placement_id": 0, 00:14:45.132 
"enable_zerocopy_send_server": false, 00:14:45.132 "enable_zerocopy_send_client": false, 00:14:45.132 "zerocopy_threshold": 0, 00:14:45.132 "tls_version": 0, 00:14:45.132 "enable_ktls": false 00:14:45.132 } 00:14:45.132 } 00:14:45.132 ] 00:14:45.132 }, 00:14:45.132 { 00:14:45.132 "subsystem": "vmd", 00:14:45.132 "config": [] 00:14:45.132 }, 00:14:45.132 { 00:14:45.132 "subsystem": "accel", 00:14:45.132 "config": [ 00:14:45.132 { 00:14:45.132 "method": "accel_set_options", 00:14:45.132 "params": { 00:14:45.132 "small_cache_size": 128, 00:14:45.132 "large_cache_size": 16, 00:14:45.132 "task_count": 2048, 00:14:45.132 "sequence_count": 2048, 00:14:45.132 "buf_count": 2048 00:14:45.132 } 00:14:45.132 } 00:14:45.132 ] 00:14:45.132 }, 00:14:45.132 { 00:14:45.132 "subsystem": "bdev", 00:14:45.132 "config": [ 00:14:45.132 { 00:14:45.132 "method": "bdev_set_options", 00:14:45.132 "params": { 00:14:45.132 "bdev_io_pool_size": 65535, 00:14:45.132 "bdev_io_cache_size": 256, 00:14:45.132 "bdev_auto_examine": true, 00:14:45.132 "iobuf_small_cache_size": 128, 00:14:45.132 "iobuf_large_cache_size": 16 00:14:45.132 } 00:14:45.132 }, 00:14:45.132 { 00:14:45.132 "method": "bdev_raid_set_options", 00:14:45.132 "params": { 00:14:45.132 "process_window_size_kb": 1024, 00:14:45.132 "process_max_bandwidth_mb_sec": 0 00:14:45.132 } 00:14:45.132 }, 00:14:45.132 { 00:14:45.132 "method": "bdev_iscsi_set_options", 00:14:45.132 "params": { 00:14:45.132 "timeout_sec": 30 00:14:45.132 } 00:14:45.132 }, 00:14:45.132 { 00:14:45.132 "method": "bdev_nvme_set_options", 00:14:45.132 "params": { 00:14:45.132 "action_on_timeout": "none", 00:14:45.132 "timeout_us": 0, 00:14:45.132 "timeout_admin_us": 0, 00:14:45.132 "keep_alive_timeout_ms": 10000, 00:14:45.132 "arbitration_burst": 0, 00:14:45.132 "low_priority_weight": 0, 00:14:45.132 "medium_priority_weight": 0, 00:14:45.132 "high_priority_weight": 0, 00:14:45.132 "nvme_adminq_poll_period_us": 10000, 00:14:45.132 "nvme_ioq_poll_period_us": 0, 00:14:45.132 "io_queue_requests": 0, 00:14:45.132 "delay_cmd_submit": true, 00:14:45.132 "transport_retry_count": 4, 00:14:45.132 "bdev_retry_count": 3, 00:14:45.132 "transport_ack_timeout": 0, 00:14:45.132 "ctrlr_loss_timeout_sec": 0, 00:14:45.132 "reconnect_delay_sec": 0, 00:14:45.132 "fast_io_fail_timeout_sec": 0, 00:14:45.132 "disable_auto_failback": false, 00:14:45.132 "generate_uuids": false, 00:14:45.132 "transport_tos": 0, 00:14:45.132 "nvme_error_stat": false, 00:14:45.132 "rdma_srq_size": 0, 00:14:45.132 "io_path_stat": false, 00:14:45.132 "allow_accel_sequence": false, 00:14:45.132 "rdma_max_cq_size": 0, 00:14:45.132 "rdma_cm_event_timeout_ms": 0, 00:14:45.132 "dhchap_digests": [ 00:14:45.132 "sha256", 00:14:45.132 "sha384", 00:14:45.132 "sha512" 00:14:45.132 ], 00:14:45.132 "dhchap_dhgroups": [ 00:14:45.132 "null", 00:14:45.132 "ffdhe2048", 00:14:45.132 "ffdhe3072", 00:14:45.132 "ffdhe4096", 00:14:45.132 "ffdhe6144", 00:14:45.132 "ffdhe8192" 00:14:45.132 ] 00:14:45.132 } 00:14:45.132 }, 00:14:45.132 { 00:14:45.132 "method": "bdev_nvme_set_hotplug", 00:14:45.132 "params": { 00:14:45.132 "period_us": 100000, 00:14:45.132 "enable": false 00:14:45.132 } 00:14:45.132 }, 00:14:45.132 { 00:14:45.132 "method": "bdev_malloc_create", 00:14:45.132 "params": { 00:14:45.132 "name": "malloc0", 00:14:45.132 "num_blocks": 8192, 00:14:45.132 "block_size": 4096, 00:14:45.132 "physical_block_size": 4096, 00:14:45.132 "uuid": "42435374-e0f6-4e27-b474-f31c1cf27ebb", 00:14:45.132 "optimal_io_boundary": 0, 00:14:45.132 "md_size": 0, 00:14:45.132 
"dif_type": 0, 00:14:45.132 "dif_is_head_of_md": false, 00:14:45.132 "dif_pi_format": 0 00:14:45.132 } 00:14:45.132 }, 00:14:45.132 { 00:14:45.132 "method": "bdev_wait_for_examine" 00:14:45.132 } 00:14:45.132 ] 00:14:45.132 }, 00:14:45.132 { 00:14:45.132 "subsystem": "nbd", 00:14:45.132 "config": [] 00:14:45.132 }, 00:14:45.132 { 00:14:45.132 "subsystem": "scheduler", 00:14:45.132 "config": [ 00:14:45.132 { 00:14:45.132 "method": "framework_set_scheduler", 00:14:45.132 "params": { 00:14:45.132 "name": "static" 00:14:45.132 } 00:14:45.132 } 00:14:45.132 ] 00:14:45.132 }, 00:14:45.132 { 00:14:45.132 "subsystem": "nvmf", 00:14:45.132 "config": [ 00:14:45.132 { 00:14:45.133 "method": "nvmf_set_config", 00:14:45.133 "params": { 00:14:45.133 "discovery_filter": "match_any", 00:14:45.133 "admin_cmd_passthru": { 00:14:45.133 "identify_ctrlr": false 00:14:45.133 } 00:14:45.133 } 00:14:45.133 }, 00:14:45.133 { 00:14:45.133 "method": "nvmf_set_max_subsystems", 00:14:45.133 "params": { 00:14:45.133 "max_subsystems": 1024 00:14:45.133 } 00:14:45.133 }, 00:14:45.133 { 00:14:45.133 "method": "nvmf_set_crdt", 00:14:45.133 "params": { 00:14:45.133 "crdt1": 0, 00:14:45.133 "crdt2": 0, 00:14:45.133 "crdt3": 0 00:14:45.133 } 00:14:45.133 }, 00:14:45.133 { 00:14:45.133 "method": "nvmf_create_transport", 00:14:45.133 "params": { 00:14:45.133 "trtype": "TCP", 00:14:45.133 "max_queue_depth": 128, 00:14:45.133 "max_io_qpairs_per_ctrlr": 127, 00:14:45.133 "in_capsule_data_size": 4096, 00:14:45.133 "max_io_size": 131072, 00:14:45.133 "io_unit_size": 131072, 00:14:45.133 "max_aq_depth": 128, 00:14:45.133 "num_shared_buffers": 511, 00:14:45.133 "buf_cache_size": 4294967295, 00:14:45.133 "dif_insert_or_strip": false, 00:14:45.133 "zcopy": false, 00:14:45.133 "c2h_success": false, 00:14:45.133 "sock_priority": 0, 00:14:45.133 "abort_timeout_sec": 1, 00:14:45.133 "ack_timeout": 0, 00:14:45.133 "data_wr_pool_size": 0 00:14:45.133 } 00:14:45.133 }, 00:14:45.133 { 00:14:45.133 "method": "nvmf_create_subsystem", 00:14:45.133 "params": { 00:14:45.133 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.133 "allow_any_host": false, 00:14:45.133 "serial_number": "00000000000000000000", 00:14:45.133 "model_number": "SPDK bdev Controller", 00:14:45.133 "max_namespaces": 32, 00:14:45.133 "min_cntlid": 1, 00:14:45.133 "max_cntlid": 65519, 00:14:45.133 "ana_reporting": false 00:14:45.133 } 00:14:45.133 }, 00:14:45.133 { 00:14:45.133 "method": "nvmf_subsystem_add_host", 00:14:45.133 "params": { 00:14:45.133 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.133 "host": "nqn.2016-06.io.spdk:host1", 00:14:45.133 "psk": "key0" 00:14:45.133 } 00:14:45.133 }, 00:14:45.133 { 00:14:45.133 "method": "nvmf_subsystem_add_ns", 00:14:45.133 "params": { 00:14:45.133 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.133 "namespace": { 00:14:45.133 "nsid": 1, 00:14:45.133 "bdev_name": "malloc0", 00:14:45.133 "nguid": "42435374E0F64E27B474F31C1CF27EBB", 00:14:45.133 "uuid": "42435374-e0f6-4e27-b474-f31c1cf27ebb", 00:14:45.133 "no_auto_visible": false 00:14:45.133 } 00:14:45.133 } 00:14:45.133 }, 00:14:45.133 { 00:14:45.133 "method": "nvmf_subsystem_add_listener", 00:14:45.133 "params": { 00:14:45.133 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.133 "listen_address": { 00:14:45.133 "trtype": "TCP", 00:14:45.133 "adrfam": "IPv4", 00:14:45.133 "traddr": "10.0.0.2", 00:14:45.133 "trsvcid": "4420" 00:14:45.133 }, 00:14:45.133 "secure_channel": false, 00:14:45.133 "sock_impl": "ssl" 00:14:45.133 } 00:14:45.133 } 00:14:45.133 ] 00:14:45.133 } 00:14:45.133 ] 00:14:45.133 
}' 00:14:45.133 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:45.133 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.133 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73362 00:14:45.133 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:45.133 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73362 00:14:45.133 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73362 ']' 00:14:45.133 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.133 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:45.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.133 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.133 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:45.133 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.133 [2024-07-25 13:57:33.995418] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:14:45.133 [2024-07-25 13:57:33.995524] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.133 [2024-07-25 13:57:34.137887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.392 [2024-07-25 13:57:34.317206] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.392 [2024-07-25 13:57:34.317277] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.392 [2024-07-25 13:57:34.317289] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.392 [2024-07-25 13:57:34.317298] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.392 [2024-07-25 13:57:34.317318] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
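The JSON echoed above is the complete target configuration for this TLS case, handed to nvmf_tgt over an anonymous file descriptor (-c /dev/fd/62) instead of being applied call-by-call over RPC. A minimal sketch of the same pattern is shown below, trimmed to the TLS-relevant methods taken from that config; the file name /tmp/tls_tgt.json, the use of a temp file instead of /dev/fd, and the keyring path (copied from the bdevperf config later in this log, the target's own keyring section is not visible here) are illustrative assumptions, and the bdev/malloc0 namespace, scheduler and transport tuning sections shown above are omitted:

  # Hypothetical condensed config; method names and params come from the config
  # echoed above, file name and key path are illustrative only.
  cat > /tmp/tls_tgt.json <<'EOF'
  {
    "subsystems": [
      { "subsystem": "keyring", "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.BEv6v6GMa2" } }
      ]},
      { "subsystem": "nvmf", "config": [
        { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } },
        { "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false } },
        { "method": "nvmf_subsystem_add_host",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                          "traddr": "10.0.0.2", "trsvcid": "4420" },
                      "secure_channel": false, "sock_impl": "ssl" } }
      ]}
    ]
  }
  EOF
  # The run above additionally wraps this in "ip netns exec nvmf_tgt_ns_spdk" and
  # passes -i 0 -e 0xFFFF; those details are dropped here for brevity.
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -c /tmp/tls_tgt.json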
00:14:45.392 [2024-07-25 13:57:34.317424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.650 [2024-07-25 13:57:34.506035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:45.650 [2024-07-25 13:57:34.599466] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.650 [2024-07-25 13:57:34.631368] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:45.650 [2024-07-25 13:57:34.642564] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.908 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:45.908 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:45.908 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:45.908 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:45.908 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.166 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.167 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=73394 00:14:46.167 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 73394 /var/tmp/bdevperf.sock 00:14:46.167 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73394 ']' 00:14:46.167 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:46.167 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:46.167 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:46.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
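Once waitforlisten sees the target's RPC socket (the default /var/tmp/spdk.sock polled above), the subsystem, allowed host with its PSK reference, and the TCP listener on 10.0.0.2:4420 can be inspected over that same socket. A minimal check, assuming rpc.py's default socket path as used in this run, might be:

  # Dump the live configuration back out in the same JSON shape that was piped in above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
  # Show the subsystem, its allowed host and the TCP listener that were just created.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems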
00:14:46.167 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:46.167 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.167 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:14:46.167 "subsystems": [ 00:14:46.167 { 00:14:46.167 "subsystem": "keyring", 00:14:46.167 "config": [ 00:14:46.167 { 00:14:46.167 "method": "keyring_file_add_key", 00:14:46.167 "params": { 00:14:46.167 "name": "key0", 00:14:46.167 "path": "/tmp/tmp.BEv6v6GMa2" 00:14:46.167 } 00:14:46.167 } 00:14:46.167 ] 00:14:46.167 }, 00:14:46.167 { 00:14:46.167 "subsystem": "iobuf", 00:14:46.167 "config": [ 00:14:46.167 { 00:14:46.167 "method": "iobuf_set_options", 00:14:46.167 "params": { 00:14:46.167 "small_pool_count": 8192, 00:14:46.167 "large_pool_count": 1024, 00:14:46.167 "small_bufsize": 8192, 00:14:46.167 "large_bufsize": 135168 00:14:46.167 } 00:14:46.167 } 00:14:46.167 ] 00:14:46.167 }, 00:14:46.167 { 00:14:46.167 "subsystem": "sock", 00:14:46.167 "config": [ 00:14:46.167 { 00:14:46.167 "method": "sock_set_default_impl", 00:14:46.167 "params": { 00:14:46.167 "impl_name": "uring" 00:14:46.167 } 00:14:46.167 }, 00:14:46.167 { 00:14:46.167 "method": "sock_impl_set_options", 00:14:46.167 "params": { 00:14:46.167 "impl_name": "ssl", 00:14:46.167 "recv_buf_size": 4096, 00:14:46.167 "send_buf_size": 4096, 00:14:46.167 "enable_recv_pipe": true, 00:14:46.167 "enable_quickack": false, 00:14:46.167 "enable_placement_id": 0, 00:14:46.167 "enable_zerocopy_send_server": true, 00:14:46.167 "enable_zerocopy_send_client": false, 00:14:46.167 "zerocopy_threshold": 0, 00:14:46.167 "tls_version": 0, 00:14:46.167 "enable_ktls": false 00:14:46.167 } 00:14:46.167 }, 00:14:46.167 { 00:14:46.167 "method": "sock_impl_set_options", 00:14:46.167 "params": { 00:14:46.167 "impl_name": "posix", 00:14:46.167 "recv_buf_size": 2097152, 00:14:46.167 "send_buf_size": 2097152, 00:14:46.167 "enable_recv_pipe": true, 00:14:46.167 "enable_quickack": false, 00:14:46.167 "enable_placement_id": 0, 00:14:46.167 "enable_zerocopy_send_server": true, 00:14:46.167 "enable_zerocopy_send_client": false, 00:14:46.167 "zerocopy_threshold": 0, 00:14:46.167 "tls_version": 0, 00:14:46.167 "enable_ktls": false 00:14:46.167 } 00:14:46.167 }, 00:14:46.167 { 00:14:46.167 "method": "sock_impl_set_options", 00:14:46.167 "params": { 00:14:46.167 "impl_name": "uring", 00:14:46.167 "recv_buf_size": 2097152, 00:14:46.167 "send_buf_size": 2097152, 00:14:46.167 "enable_recv_pipe": true, 00:14:46.167 "enable_quickack": false, 00:14:46.167 "enable_placement_id": 0, 00:14:46.167 "enable_zerocopy_send_server": false, 00:14:46.167 "enable_zerocopy_send_client": false, 00:14:46.167 "zerocopy_threshold": 0, 00:14:46.167 "tls_version": 0, 00:14:46.167 "enable_ktls": false 00:14:46.167 } 00:14:46.167 } 00:14:46.167 ] 00:14:46.167 }, 00:14:46.167 { 00:14:46.167 "subsystem": "vmd", 00:14:46.167 "config": [] 00:14:46.167 }, 00:14:46.167 { 00:14:46.167 "subsystem": "accel", 00:14:46.167 "config": [ 00:14:46.167 { 00:14:46.167 "method": "accel_set_options", 00:14:46.167 "params": { 00:14:46.167 "small_cache_size": 128, 00:14:46.167 "large_cache_size": 16, 00:14:46.167 "task_count": 2048, 00:14:46.167 "sequence_count": 2048, 00:14:46.167 "buf_count": 2048 00:14:46.167 } 00:14:46.167 } 00:14:46.167 ] 00:14:46.167 }, 00:14:46.167 { 00:14:46.167 "subsystem": "bdev", 00:14:46.167 "config": [ 00:14:46.167 { 00:14:46.167 "method": "bdev_set_options", 00:14:46.167 "params": { 
00:14:46.167 "bdev_io_pool_size": 65535, 00:14:46.167 "bdev_io_cache_size": 256, 00:14:46.167 "bdev_auto_examine": true, 00:14:46.167 "iobuf_small_cache_size": 128, 00:14:46.167 "iobuf_large_cache_size": 16 00:14:46.167 } 00:14:46.167 }, 00:14:46.167 { 00:14:46.167 "method": "bdev_raid_set_options", 00:14:46.167 "params": { 00:14:46.167 "process_window_size_kb": 1024, 00:14:46.167 "process_max_bandwidth_mb_sec": 0 00:14:46.167 } 00:14:46.167 }, 00:14:46.167 { 00:14:46.167 "method": "bdev_iscsi_set_options", 00:14:46.167 "params": { 00:14:46.167 "timeout_sec": 30 00:14:46.167 } 00:14:46.167 }, 00:14:46.167 { 00:14:46.167 "method": "bdev_nvme_set_options", 00:14:46.167 "params": { 00:14:46.167 "action_on_timeout": "none", 00:14:46.167 "timeout_us": 0, 00:14:46.167 "timeout_admin_us": 0, 00:14:46.167 "keep_alive_timeout_ms": 10000, 00:14:46.167 "arbitration_burst": 0, 00:14:46.167 "low_priority_weight": 0, 00:14:46.167 "medium_priority_weight": 0, 00:14:46.167 "high_priority_weight": 0, 00:14:46.167 "nvme_adminq_poll_period_us": 10000, 00:14:46.167 "nvme_ioq_poll_period_us": 0, 00:14:46.167 "io_queue_requests": 512, 00:14:46.167 "delay_cmd_submit": true, 00:14:46.167 "transport_retry_count": 4, 00:14:46.167 "bdev_retry_count": 3, 00:14:46.167 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:46.167 "transport_ack_timeout": 0, 00:14:46.167 "ctrlr_loss_timeout_sec": 0, 00:14:46.167 "reconnect_delay_sec": 0, 00:14:46.167 "fast_io_fail_timeout_sec": 0, 00:14:46.167 "disable_auto_failback": false, 00:14:46.167 "generate_uuids": false, 00:14:46.167 "transport_tos": 0, 00:14:46.167 "nvme_error_stat": false, 00:14:46.167 "rdma_srq_size": 0, 00:14:46.167 "io_path_stat": false, 00:14:46.167 "allow_accel_sequence": false, 00:14:46.167 "rdma_max_cq_size": 0, 00:14:46.167 "rdma_cm_event_timeout_ms": 0, 00:14:46.167 "dhchap_digests": [ 00:14:46.167 "sha256", 00:14:46.167 "sha384", 00:14:46.167 "sha512" 00:14:46.167 ], 00:14:46.167 "dhchap_dhgroups": [ 00:14:46.167 "null", 00:14:46.167 "ffdhe2048", 00:14:46.167 "ffdhe3072", 00:14:46.167 "ffdhe4096", 00:14:46.167 "ffdhe6144", 00:14:46.167 "ffdhe8192" 00:14:46.167 ] 00:14:46.167 } 00:14:46.167 }, 00:14:46.167 { 00:14:46.167 "method": "bdev_nvme_attach_controller", 00:14:46.167 "params": { 00:14:46.167 "name": "nvme0", 00:14:46.167 "trtype": "TCP", 00:14:46.167 "adrfam": "IPv4", 00:14:46.167 "traddr": "10.0.0.2", 00:14:46.167 "trsvcid": "4420", 00:14:46.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.167 "prchk_reftag": false, 00:14:46.167 "prchk_guard": false, 00:14:46.167 "ctrlr_loss_timeout_sec": 0, 00:14:46.167 "reconnect_delay_sec": 0, 00:14:46.167 "fast_io_fail_timeout_sec": 0, 00:14:46.167 "psk": "key0", 00:14:46.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:46.167 "hdgst": false, 00:14:46.167 "ddgst": false 00:14:46.167 } 00:14:46.167 }, 00:14:46.167 { 00:14:46.167 "method": "bdev_nvme_set_hotplug", 00:14:46.167 "params": { 00:14:46.167 "period_us": 100000, 00:14:46.167 "enable": false 00:14:46.167 } 00:14:46.167 }, 00:14:46.167 { 00:14:46.167 "method": "bdev_enable_histogram", 00:14:46.167 "params": { 00:14:46.167 "name": "nvme0n1", 00:14:46.167 "enable": true 00:14:46.167 } 00:14:46.168 }, 00:14:46.168 { 00:14:46.168 "method": "bdev_wait_for_examine" 00:14:46.168 } 00:14:46.168 ] 00:14:46.168 }, 00:14:46.168 { 00:14:46.168 "subsystem": "nbd", 00:14:46.168 "config": [] 00:14:46.168 } 
00:14:46.168 ] 00:14:46.168 }' 00:14:46.168 [2024-07-25 13:57:35.027113] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:14:46.168 [2024-07-25 13:57:35.027225] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73394 ] 00:14:46.168 [2024-07-25 13:57:35.192662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.426 [2024-07-25 13:57:35.339005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.684 [2024-07-25 13:57:35.475151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:46.684 [2024-07-25 13:57:35.523900] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:46.942 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:46.942 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:46.942 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:46.942 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:14:47.507 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.507 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:47.508 Running I/O for 1 seconds... 
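The client half mirrors the target: the bdevperf config echoed above registers the same PSK as key0 via keyring_file_add_key and passes "psk": "key0" to bdev_nvme_attach_controller, so the TCP connection to 10.0.0.2:4420 is carried over TLS. Because bdevperf is started with -z it sits idle on /var/tmp/bdevperf.sock until driven over RPC, which is exactly what the two calls traced above do; reproduced by hand from this run (paths and socket as used here), the driving sequence is the sketch below, with the per-core results following it:

  # Confirm the TLS-backed controller came up (prints "nvme0" in this run)...
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
  # ...then kick off the queued verify workload on the idle (-z) bdevperf instance.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests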
00:14:48.457 00:14:48.457 Latency(us) 00:14:48.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.457 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:48.457 Verification LBA range: start 0x0 length 0x2000 00:14:48.457 nvme0n1 : 1.02 3428.70 13.39 0.00 0.00 36841.73 6970.65 27763.43 00:14:48.457 =================================================================================================================== 00:14:48.457 Total : 3428.70 13.39 0.00 0.00 36841.73 6970.65 27763.43 00:14:48.457 0 00:14:48.457 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:14:48.457 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:14:48.457 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:48.457 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:14:48.457 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:14:48.457 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:14:48.457 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:48.457 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:14:48.457 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:14:48.457 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:14:48.457 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:48.457 nvmf_trace.0 00:14:48.714 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:14:48.714 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 73394 00:14:48.714 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73394 ']' 00:14:48.714 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73394 00:14:48.714 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:48.714 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:48.714 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73394 00:14:48.714 killing process with pid 73394 00:14:48.714 Received shutdown signal, test time was about 1.000000 seconds 00:14:48.714 00:14:48.714 Latency(us) 00:14:48.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.714 =================================================================================================================== 00:14:48.714 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:48.714 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:48.714 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:48.714 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73394' 00:14:48.714 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 73394 00:14:48.714 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73394 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:48.970 rmmod nvme_tcp 00:14:48.970 rmmod nvme_fabrics 00:14:48.970 rmmod nvme_keyring 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 73362 ']' 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 73362 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73362 ']' 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73362 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73362 00:14:48.970 killing process with pid 73362 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73362' 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73362 00:14:48.970 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73362 00:14:49.228 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:49.228 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:49.228 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:49.228 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:49.228 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:49.228 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.228 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:49.228 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
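A quick sanity check of the one-second verify run reported above (queue depth 128, 4 KiB I/Os on nvme0n1), added here as commentary and not part of the test output: the MiB/s column is simply IOPS times the I/O size, and the average latency is consistent with Little's law for a near-full queue.

  3428.70 IOPS x 4096 B          = 14,043,955 B/s  ≈ 13.39 MiB/s   (matches the MiB/s column)
  128 outstanding / 3428.70 IOPS = 0.0373 s        ≈ 37,330 us     (reported average: 36,841.73 us)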
00:14:49.228 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:49.228 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.pkkWiAzws6 /tmp/tmp.OWcxyqJAMk /tmp/tmp.BEv6v6GMa2 00:14:49.228 00:14:49.228 real 1m29.064s 00:14:49.228 user 2m23.556s 00:14:49.228 sys 0m27.900s 00:14:49.228 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:49.228 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.228 ************************************ 00:14:49.228 END TEST nvmf_tls 00:14:49.228 ************************************ 00:14:49.228 13:57:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:49.228 13:57:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:49.228 13:57:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:49.228 13:57:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:49.228 ************************************ 00:14:49.228 START TEST nvmf_fips 00:14:49.228 ************************************ 00:14:49.228 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:49.487 * Looking for test storage... 00:14:49.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 
-- # NET_TYPE=virt 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.487 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 
-- # : 0 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:14:49.488 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:14:49.489 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:14:49.489 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:14:49.489 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:14:49.489 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:14:49.489 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:49.489 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:14:49.489 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:14:49.489 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:14:49.489 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:49.489 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:14:49.489 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:49.489 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:14:49.489 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:49.489 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:14:49.489 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:49.489 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:14:49.489 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:14:49.489 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:14:49.747 Error setting digest 00:14:49.747 00723BF32B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:14:49.747 00723BF32B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:14:49.747 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:14:49.747 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:49.747 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:49.747 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:49.747 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:14:49.747 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:49.747 
13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:49.747 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:49.747 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:49.747 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:49.747 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.747 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:49.747 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.747 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:49.747 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:49.748 Cannot find device "nvmf_tgt_br" 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # true 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:49.748 Cannot find device "nvmf_tgt_br2" 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # true 00:14:49.748 13:57:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:49.748 Cannot find device "nvmf_tgt_br" 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # true 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:49.748 Cannot find device "nvmf_tgt_br2" 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # true 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:49.748 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:49.748 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:49.748 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:50.006 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:50.006 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:50.006 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:50.006 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:50.006 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:50.006 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:50.006 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:50.006 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:50.006 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:50.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:50.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:14:50.006 00:14:50.006 --- 10.0.0.2 ping statistics --- 00:14:50.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.006 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:14:50.006 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:50.006 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:50.006 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:14:50.006 00:14:50.006 --- 10.0.0.3 ping statistics --- 00:14:50.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.006 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:50.006 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:50.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:50.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:14:50.006 00:14:50.006 --- 10.0.0.1 ping statistics --- 00:14:50.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.006 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:50.006 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:50.006 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:14:50.006 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:50.006 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:50.006 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:50.006 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:50.006 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:50.006 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:50.007 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:50.007 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:14:50.007 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:50.007 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:50.007 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:50.007 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=73662 00:14:50.007 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 73662 00:14:50.007 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73662 ']' 00:14:50.007 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:50.007 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.007 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:50.007 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.007 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:50.007 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:50.007 [2024-07-25 13:57:38.988498] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:14:50.007 [2024-07-25 13:57:38.988603] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.265 [2024-07-25 13:57:39.128697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.265 [2024-07-25 13:57:39.291419] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.265 [2024-07-25 13:57:39.291489] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.265 [2024-07-25 13:57:39.291504] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.265 [2024-07-25 13:57:39.291516] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.265 [2024-07-25 13:57:39.291526] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:50.265 [2024-07-25 13:57:39.291566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.522 [2024-07-25 13:57:39.370841] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:51.119 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:51.119 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:14:51.119 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:51.119 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:51.119 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:51.120 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.120 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:14:51.120 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:51.120 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:51.120 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:51.120 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:51.120 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:51.120 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:51.120 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:51.376 [2024-07-25 13:57:40.305313] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.376 [2024-07-25 13:57:40.321242] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:51.376 [2024-07-25 13:57:40.321580] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.376 [2024-07-25 13:57:40.357616] 
tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:51.376 malloc0 00:14:51.376 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:51.376 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=73697 00:14:51.376 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:51.376 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 73697 /var/tmp/bdevperf.sock 00:14:51.376 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73697 ']' 00:14:51.376 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:51.376 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:51.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:51.377 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:51.377 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:51.377 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:51.633 [2024-07-25 13:57:40.492512] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:14:51.633 [2024-07-25 13:57:40.492654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73697 ] 00:14:51.633 [2024-07-25 13:57:40.639183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.890 [2024-07-25 13:57:40.774885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.890 [2024-07-25 13:57:40.833678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:52.823 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:52.823 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:14:52.823 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:52.823 [2024-07-25 13:57:41.756386] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:52.823 [2024-07-25 13:57:41.756535] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:52.823 TLSTESTn1 00:14:52.823 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:53.082 Running I/O for 10 seconds... 
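In the FIPS variant the same TLS interop is exercised with a PSK interchange file rather than a named keyring key: the NVMeTLSkey-1:01:... key is written to test/nvmf/fips/key.txt, locked down with chmod 0600, handed to the target through setup_nvmf_tgt_conf (hence the "PSK path ... deprecated" warning above), and then passed to the initiator with --psk on bdev_nvme_attach_controller. The client-side attach, reproduced from the trace above with bdevperf again idling on /var/tmp/bdevperf.sock, is sketched below; the 10-second verify results follow it.

  # Restrict the PSK interchange file before handing it to either side.
  chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  # Attach the TLS-protected controller from the bdevperf process (exposes bdev TLSTESTn1 here).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt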
00:15:03.048 00:15:03.048 Latency(us) 00:15:03.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.048 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:03.048 Verification LBA range: start 0x0 length 0x2000 00:15:03.048 TLSTESTn1 : 10.02 3755.66 14.67 0.00 0.00 34006.65 8102.63 37415.10 00:15:03.048 =================================================================================================================== 00:15:03.048 Total : 3755.66 14.67 0.00 0.00 34006.65 8102.63 37415.10 00:15:03.048 0 00:15:03.048 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:03.048 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:03.048 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:15:03.048 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:15:03.048 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:15:03.048 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:03.048 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:15:03.048 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:15:03.049 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:15:03.049 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:03.049 nvmf_trace.0 00:15:03.351 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:15:03.351 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73697 00:15:03.351 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73697 ']' 00:15:03.351 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73697 00:15:03.351 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:03.351 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:03.351 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73697 00:15:03.351 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:03.351 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:03.351 killing process with pid 73697 00:15:03.351 Received shutdown signal, test time was about 10.000000 seconds 00:15:03.351 00:15:03.351 Latency(us) 00:15:03.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.351 =================================================================================================================== 00:15:03.351 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:03.351 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73697' 00:15:03.351 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73697 00:15:03.351 [2024-07-25 13:57:52.179696] 
app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:03.352 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73697 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:03.610 rmmod nvme_tcp 00:15:03.610 rmmod nvme_fabrics 00:15:03.610 rmmod nvme_keyring 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 73662 ']' 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 73662 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73662 ']' 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73662 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73662 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73662' 00:15:03.610 killing process with pid 73662 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73662 00:15:03.610 [2024-07-25 13:57:52.540522] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:03.610 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73662 00:15:03.869 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:03.869 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:03.869 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:03.869 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:03.869 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:03.869 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.869 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:03.869 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.127 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:04.127 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:04.127 00:15:04.127 real 0m14.684s 00:15:04.127 user 0m19.797s 00:15:04.127 sys 0m6.114s 00:15:04.127 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:04.127 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:04.127 ************************************ 00:15:04.127 END TEST nvmf_fips 00:15:04.127 ************************************ 00:15:04.127 13:57:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:15:04.127 13:57:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ virt == phy ]] 00:15:04.127 13:57:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:15:04.127 00:15:04.127 real 4m42.391s 00:15:04.127 user 9m53.193s 00:15:04.127 sys 1m2.992s 00:15:04.127 13:57:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:04.127 13:57:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:04.127 ************************************ 00:15:04.127 END TEST nvmf_target_extra 00:15:04.127 ************************************ 00:15:04.127 13:57:53 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:04.127 13:57:53 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:04.127 13:57:53 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:04.127 13:57:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:04.127 ************************************ 00:15:04.127 START TEST nvmf_host 00:15:04.127 ************************************ 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:04.127 * Looking for test storage... 
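When the run finishes, the cleanup traced above condenses to roughly the following; this is a sketch of what the suite's process_shm, killprocess and nvmftestfini helpers do, using the PIDs and paths from this trace (the netns removal is an assumption about what _remove_spdk_ns amounts to).

    # Preserve the trace buffer for offline analysis (process_shm --id 0), then stop bdevperf
    tar -C /dev/shm/ -cvzf ../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
    kill 73697 && wait 73697          # bdevperf pid from this run

    # nvmfcleanup: flush and unload the kernel initiator stack, then stop the target
    sync
    modprobe -v -r nvme-tcp           # also pulls out nvme_fabrics / nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill 73662 && wait 73662          # nvmf target pid

    # Drop the test network plumbing and the PSK file
    ip netns delete nvmf_tgt_ns_spdk  # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if
    rm -f test/nvmf/fips/key.txt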
00:15:04.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.127 13:57:53 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:04.128 ************************************ 00:15:04.128 START TEST nvmf_identify 00:15:04.128 ************************************ 00:15:04.128 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:04.387 * Looking for test storage... 
00:15:04.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:04.387 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:04.388 Cannot find device "nvmf_tgt_br" 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # true 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:04.388 Cannot find device "nvmf_tgt_br2" 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # true 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:04.388 Cannot find device "nvmf_tgt_br" 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@158 -- # true 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:04.388 Cannot find device "nvmf_tgt_br2" 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # true 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:04.388 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:04.388 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:04.388 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:04.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:04.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:15:04.647 00:15:04.647 --- 10.0.0.2 ping statistics --- 00:15:04.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.647 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:04.647 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:04.647 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:15:04.647 00:15:04.647 --- 10.0.0.3 ping statistics --- 00:15:04.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.647 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:04.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:04.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:04.647 00:15:04.647 --- 10.0.0.1 ping statistics --- 00:15:04.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.647 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74073 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74073 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 74073 ']' 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:04.647 13:57:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:04.647 [2024-07-25 13:57:53.669747] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:15:04.647 [2024-07-25 13:57:53.669871] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.905 [2024-07-25 13:57:53.811424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:05.163 [2024-07-25 13:57:53.963902] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.163 [2024-07-25 13:57:53.963999] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.163 [2024-07-25 13:57:53.964017] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.163 [2024-07-25 13:57:53.964030] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.163 [2024-07-25 13:57:53.964042] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
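The nvmf_veth_init sequence traced just above is what gives the target its own network namespace; condensed, with the addressing used in this run, it is:

    # One veth pair for the initiator, two for the target; the target ends move into a netns
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers together and open TCP/4420 from the initiator interface
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings confirm reachability in both directions before the target is launched inside the namespace (NVMF_APP is prefixed with "ip netns exec nvmf_tgt_ns_spdk"), which is why the nvmf_tgt above listens on 10.0.0.2 while the host-side tools connect to it from 10.0.0.1.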
00:15:05.163 [2024-07-25 13:57:53.964289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.163 [2024-07-25 13:57:53.964801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:05.163 [2024-07-25 13:57:53.964894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:05.163 [2024-07-25 13:57:53.964913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.163 [2024-07-25 13:57:54.019809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:05.729 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:05.729 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:15:05.729 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:05.729 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.729 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:05.729 [2024-07-25 13:57:54.685100] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.729 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.729 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:05.729 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:05.729 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:05.729 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:05.729 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.729 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:05.990 Malloc0 00:15:05.990 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.990 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:05.990 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.990 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:05.990 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.991 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:05.991 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.991 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:05.991 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.991 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.991 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.991 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:05.991 [2024-07-25 13:57:54.800123] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.991 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.991 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:05.991 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.991 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:05.991 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.991 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:05.991 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.991 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:05.991 [ 00:15:05.991 { 00:15:05.991 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:05.991 "subtype": "Discovery", 00:15:05.991 "listen_addresses": [ 00:15:05.991 { 00:15:05.991 "trtype": "TCP", 00:15:05.991 "adrfam": "IPv4", 00:15:05.991 "traddr": "10.0.0.2", 00:15:05.991 "trsvcid": "4420" 00:15:05.991 } 00:15:05.991 ], 00:15:05.991 "allow_any_host": true, 00:15:05.991 "hosts": [] 00:15:05.991 }, 00:15:05.991 { 00:15:05.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.991 "subtype": "NVMe", 00:15:05.991 "listen_addresses": [ 00:15:05.991 { 00:15:05.991 "trtype": "TCP", 00:15:05.991 "adrfam": "IPv4", 00:15:05.991 "traddr": "10.0.0.2", 00:15:05.991 "trsvcid": "4420" 00:15:05.991 } 00:15:05.991 ], 00:15:05.991 "allow_any_host": true, 00:15:05.991 "hosts": [], 00:15:05.991 "serial_number": "SPDK00000000000001", 00:15:05.991 "model_number": "SPDK bdev Controller", 00:15:05.991 "max_namespaces": 32, 00:15:05.991 "min_cntlid": 1, 00:15:05.991 "max_cntlid": 65519, 00:15:05.991 "namespaces": [ 00:15:05.991 { 00:15:05.991 "nsid": 1, 00:15:05.991 "bdev_name": "Malloc0", 00:15:05.991 "name": "Malloc0", 00:15:05.991 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:05.991 "eui64": "ABCDEF0123456789", 00:15:05.991 "uuid": "c2237bdd-c7b8-4199-8583-f8fc36bd28cf" 00:15:05.991 } 00:15:05.991 ] 00:15:05.991 } 00:15:05.991 ] 00:15:05.991 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.991 13:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:05.991 [2024-07-25 13:57:54.847672] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
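Condensed, the identify host test provisions a single Malloc-backed namespace and then points the spdk_nvme_identify example at the discovery service. The sketch below is reconstructed from the calls visible above; rpc_cmd stands for the suite's shell wrapper around scripts/rpc.py talking to the namespaced target, and paths are relative to the SPDK repo.

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0            # 64 MB bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Query the discovery subsystem over TCP; -L all enables the *DEBUG* nvme/nvme_tcp
    # log flags that produce the connection trace filling the remainder of this log
    build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all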
00:15:05.991 [2024-07-25 13:57:54.847738] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74108 ] 00:15:05.991 [2024-07-25 13:57:54.988677] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:05.991 [2024-07-25 13:57:54.988770] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:05.991 [2024-07-25 13:57:54.988777] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:05.991 [2024-07-25 13:57:54.988791] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:05.991 [2024-07-25 13:57:54.988802] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:05.991 [2024-07-25 13:57:54.988968] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:05.991 [2024-07-25 13:57:54.989018] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x14fd2c0 0 00:15:05.991 [2024-07-25 13:57:54.993338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:05.991 [2024-07-25 13:57:54.993370] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:05.991 [2024-07-25 13:57:54.993377] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:05.991 [2024-07-25 13:57:54.993381] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:05.991 [2024-07-25 13:57:54.993436] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.991 [2024-07-25 13:57:54.993444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.991 [2024-07-25 13:57:54.993448] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14fd2c0) 00:15:05.991 [2024-07-25 13:57:54.993465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:05.991 [2024-07-25 13:57:54.993504] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153e940, cid 0, qid 0 00:15:05.991 [2024-07-25 13:57:54.998339] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.991 [2024-07-25 13:57:54.998365] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.991 [2024-07-25 13:57:54.998371] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.991 [2024-07-25 13:57:54.998377] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153e940) on tqpair=0x14fd2c0 00:15:05.991 [2024-07-25 13:57:54.998392] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:05.991 [2024-07-25 13:57:54.998402] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:05.991 [2024-07-25 13:57:54.998408] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:05.991 [2024-07-25 13:57:54.998431] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.991 [2024-07-25 13:57:54.998437] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.991 
[2024-07-25 13:57:54.998441] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14fd2c0) 00:15:05.991 [2024-07-25 13:57:54.998455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.991 [2024-07-25 13:57:54.998485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153e940, cid 0, qid 0 00:15:05.991 [2024-07-25 13:57:54.998544] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.991 [2024-07-25 13:57:54.998551] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.991 [2024-07-25 13:57:54.998555] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.991 [2024-07-25 13:57:54.998559] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153e940) on tqpair=0x14fd2c0 00:15:05.991 [2024-07-25 13:57:54.998565] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:05.991 [2024-07-25 13:57:54.998573] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:05.991 [2024-07-25 13:57:54.998582] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.991 [2024-07-25 13:57:54.998586] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.991 [2024-07-25 13:57:54.998590] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14fd2c0) 00:15:05.991 [2024-07-25 13:57:54.998598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.991 [2024-07-25 13:57:54.998616] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153e940, cid 0, qid 0 00:15:05.991 [2024-07-25 13:57:54.998660] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.991 [2024-07-25 13:57:54.998667] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.991 [2024-07-25 13:57:54.998671] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.991 [2024-07-25 13:57:54.998675] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153e940) on tqpair=0x14fd2c0 00:15:05.991 [2024-07-25 13:57:54.998681] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:05.991 [2024-07-25 13:57:54.998690] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:05.991 [2024-07-25 13:57:54.998697] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.991 [2024-07-25 13:57:54.998702] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.991 [2024-07-25 13:57:54.998706] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14fd2c0) 00:15:05.991 [2024-07-25 13:57:54.998713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.991 [2024-07-25 13:57:54.998731] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153e940, cid 0, qid 0 00:15:05.991 [2024-07-25 13:57:54.998779] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.991 [2024-07-25 13:57:54.998796] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.991 [2024-07-25 13:57:54.998801] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.991 [2024-07-25 13:57:54.998805] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153e940) on tqpair=0x14fd2c0 00:15:05.991 [2024-07-25 13:57:54.998811] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:05.991 [2024-07-25 13:57:54.998822] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.991 [2024-07-25 13:57:54.998827] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.991 [2024-07-25 13:57:54.998831] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14fd2c0) 00:15:05.991 [2024-07-25 13:57:54.998839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.991 [2024-07-25 13:57:54.998857] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153e940, cid 0, qid 0 00:15:05.991 [2024-07-25 13:57:54.998904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.991 [2024-07-25 13:57:54.998911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.991 [2024-07-25 13:57:54.998915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.991 [2024-07-25 13:57:54.998919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153e940) on tqpair=0x14fd2c0 00:15:05.992 [2024-07-25 13:57:54.998924] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:05.992 [2024-07-25 13:57:54.998929] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:05.992 [2024-07-25 13:57:54.998938] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:05.992 [2024-07-25 13:57:54.999044] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:05.992 [2024-07-25 13:57:54.999057] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:05.992 [2024-07-25 13:57:54.999067] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999075] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14fd2c0) 00:15:05.992 [2024-07-25 13:57:54.999083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.992 [2024-07-25 13:57:54.999102] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153e940, cid 0, qid 0 00:15:05.992 [2024-07-25 13:57:54.999159] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.992 [2024-07-25 13:57:54.999166] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.992 [2024-07-25 13:57:54.999170] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.992 
[2024-07-25 13:57:54.999174] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153e940) on tqpair=0x14fd2c0 00:15:05.992 [2024-07-25 13:57:54.999179] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:05.992 [2024-07-25 13:57:54.999190] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14fd2c0) 00:15:05.992 [2024-07-25 13:57:54.999206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.992 [2024-07-25 13:57:54.999222] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153e940, cid 0, qid 0 00:15:05.992 [2024-07-25 13:57:54.999265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.992 [2024-07-25 13:57:54.999271] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.992 [2024-07-25 13:57:54.999275] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999279] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153e940) on tqpair=0x14fd2c0 00:15:05.992 [2024-07-25 13:57:54.999284] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:05.992 [2024-07-25 13:57:54.999290] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:05.992 [2024-07-25 13:57:54.999298] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:05.992 [2024-07-25 13:57:54.999323] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:05.992 [2024-07-25 13:57:54.999336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999341] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14fd2c0) 00:15:05.992 [2024-07-25 13:57:54.999349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.992 [2024-07-25 13:57:54.999369] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153e940, cid 0, qid 0 00:15:05.992 [2024-07-25 13:57:54.999479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:05.992 [2024-07-25 13:57:54.999492] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:05.992 [2024-07-25 13:57:54.999497] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999501] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14fd2c0): datao=0, datal=4096, cccid=0 00:15:05.992 [2024-07-25 13:57:54.999507] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x153e940) on tqpair(0x14fd2c0): expected_datao=0, payload_size=4096 00:15:05.992 [2024-07-25 13:57:54.999512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.992 
[2024-07-25 13:57:54.999521] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999526] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999536] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.992 [2024-07-25 13:57:54.999542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.992 [2024-07-25 13:57:54.999546] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999550] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153e940) on tqpair=0x14fd2c0 00:15:05.992 [2024-07-25 13:57:54.999561] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:05.992 [2024-07-25 13:57:54.999566] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:05.992 [2024-07-25 13:57:54.999571] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:05.992 [2024-07-25 13:57:54.999582] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:05.992 [2024-07-25 13:57:54.999587] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:05.992 [2024-07-25 13:57:54.999593] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:05.992 [2024-07-25 13:57:54.999602] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:05.992 [2024-07-25 13:57:54.999611] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999615] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999619] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14fd2c0) 00:15:05.992 [2024-07-25 13:57:54.999627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:05.992 [2024-07-25 13:57:54.999648] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153e940, cid 0, qid 0 00:15:05.992 [2024-07-25 13:57:54.999702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.992 [2024-07-25 13:57:54.999708] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.992 [2024-07-25 13:57:54.999712] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999716] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153e940) on tqpair=0x14fd2c0 00:15:05.992 [2024-07-25 13:57:54.999725] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999729] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999733] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14fd2c0) 00:15:05.992 [2024-07-25 13:57:54.999740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.992 [2024-07-25 13:57:54.999747] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999751] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x14fd2c0) 00:15:05.992 [2024-07-25 13:57:54.999761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.992 [2024-07-25 13:57:54.999768] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999772] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999776] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x14fd2c0) 00:15:05.992 [2024-07-25 13:57:54.999782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.992 [2024-07-25 13:57:54.999789] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999813] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14fd2c0) 00:15:05.992 [2024-07-25 13:57:54.999824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.992 [2024-07-25 13:57:54.999829] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:05.992 [2024-07-25 13:57:54.999839] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:05.992 [2024-07-25 13:57:54.999847] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:54.999851] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14fd2c0) 00:15:05.992 [2024-07-25 13:57:54.999859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.992 [2024-07-25 13:57:54.999890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153e940, cid 0, qid 0 00:15:05.992 [2024-07-25 13:57:54.999899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153eac0, cid 1, qid 0 00:15:05.992 [2024-07-25 13:57:54.999904] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153ec40, cid 2, qid 0 00:15:05.992 [2024-07-25 13:57:54.999909] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153edc0, cid 3, qid 0 00:15:05.992 [2024-07-25 13:57:54.999914] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153ef40, cid 4, qid 0 00:15:05.992 [2024-07-25 13:57:54.999998] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.992 [2024-07-25 13:57:55.000005] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.992 [2024-07-25 13:57:55.000009] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.992 [2024-07-25 13:57:55.000013] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153ef40) on tqpair=0x14fd2c0 00:15:05.992 [2024-07-25 13:57:55.000019] 
nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:05.992 [2024-07-25 13:57:55.000024] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:05.993 [2024-07-25 13:57:55.000037] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000041] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14fd2c0) 00:15:05.993 [2024-07-25 13:57:55.000049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.993 [2024-07-25 13:57:55.000067] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153ef40, cid 4, qid 0 00:15:05.993 [2024-07-25 13:57:55.000136] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:05.993 [2024-07-25 13:57:55.000153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:05.993 [2024-07-25 13:57:55.000157] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000161] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14fd2c0): datao=0, datal=4096, cccid=4 00:15:05.993 [2024-07-25 13:57:55.000166] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x153ef40) on tqpair(0x14fd2c0): expected_datao=0, payload_size=4096 00:15:05.993 [2024-07-25 13:57:55.000171] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000178] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000183] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000191] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.993 [2024-07-25 13:57:55.000197] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.993 [2024-07-25 13:57:55.000201] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000205] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153ef40) on tqpair=0x14fd2c0 00:15:05.993 [2024-07-25 13:57:55.000220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:05.993 [2024-07-25 13:57:55.000252] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000258] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14fd2c0) 00:15:05.993 [2024-07-25 13:57:55.000266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.993 [2024-07-25 13:57:55.000275] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000279] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000283] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14fd2c0) 00:15:05.993 [2024-07-25 13:57:55.000289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.993 [2024-07-25 13:57:55.000329] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x153ef40, cid 4, qid 0 00:15:05.993 [2024-07-25 13:57:55.000339] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153f0c0, cid 5, qid 0 00:15:05.993 [2024-07-25 13:57:55.000456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:05.993 [2024-07-25 13:57:55.000468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:05.993 [2024-07-25 13:57:55.000472] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000476] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14fd2c0): datao=0, datal=1024, cccid=4 00:15:05.993 [2024-07-25 13:57:55.000481] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x153ef40) on tqpair(0x14fd2c0): expected_datao=0, payload_size=1024 00:15:05.993 [2024-07-25 13:57:55.000486] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000493] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000498] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000504] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.993 [2024-07-25 13:57:55.000510] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.993 [2024-07-25 13:57:55.000514] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000518] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153f0c0) on tqpair=0x14fd2c0 00:15:05.993 [2024-07-25 13:57:55.000536] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.993 [2024-07-25 13:57:55.000544] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.993 [2024-07-25 13:57:55.000547] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000551] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153ef40) on tqpair=0x14fd2c0 00:15:05.993 [2024-07-25 13:57:55.000564] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000569] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14fd2c0) 00:15:05.993 [2024-07-25 13:57:55.000577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.993 [2024-07-25 13:57:55.000600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153ef40, cid 4, qid 0 00:15:05.993 [2024-07-25 13:57:55.000673] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:05.993 [2024-07-25 13:57:55.000684] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:05.993 [2024-07-25 13:57:55.000689] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000693] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14fd2c0): datao=0, datal=3072, cccid=4 00:15:05.993 [2024-07-25 13:57:55.000697] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x153ef40) on tqpair(0x14fd2c0): expected_datao=0, payload_size=3072 00:15:05.993 [2024-07-25 13:57:55.000702] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000709] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000714] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000722] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.993 [2024-07-25 13:57:55.000729] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.993 [2024-07-25 13:57:55.000732] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000736] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153ef40) on tqpair=0x14fd2c0 00:15:05.993 [2024-07-25 13:57:55.000747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000751] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14fd2c0) 00:15:05.993 [2024-07-25 13:57:55.000759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.993 [2024-07-25 13:57:55.000782] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153ef40, cid 4, qid 0 00:15:05.993 [2024-07-25 13:57:55.000848] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:05.993 [2024-07-25 13:57:55.000855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:05.993 [2024-07-25 13:57:55.000859] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000863] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14fd2c0): datao=0, datal=8, cccid=4 00:15:05.993 [2024-07-25 13:57:55.000868] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x153ef40) on tqpair(0x14fd2c0): expected_datao=0, payload_size=8 00:15:05.993 [2024-07-25 13:57:55.000873] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000880] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:05.993 [2024-07-25 13:57:55.000884] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:05.993 ===================================================== 00:15:05.993 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:05.993 ===================================================== 00:15:05.993 Controller Capabilities/Features 00:15:05.993 ================================ 00:15:05.993 Vendor ID: 0000 00:15:05.993 Subsystem Vendor ID: 0000 00:15:05.993 Serial Number: .................... 00:15:05.993 Model Number: ........................................ 
00:15:05.993 Firmware Version: 24.09 00:15:05.993 Recommended Arb Burst: 0 00:15:05.993 IEEE OUI Identifier: 00 00 00 00:15:05.993 Multi-path I/O 00:15:05.993 May have multiple subsystem ports: No 00:15:05.993 May have multiple controllers: No 00:15:05.993 Associated with SR-IOV VF: No 00:15:05.993 Max Data Transfer Size: 131072 00:15:05.993 Max Number of Namespaces: 0 00:15:05.993 Max Number of I/O Queues: 1024 00:15:05.993 NVMe Specification Version (VS): 1.3 00:15:05.993 NVMe Specification Version (Identify): 1.3 00:15:05.993 Maximum Queue Entries: 128 00:15:05.993 Contiguous Queues Required: Yes 00:15:05.993 Arbitration Mechanisms Supported 00:15:05.993 Weighted Round Robin: Not Supported 00:15:05.993 Vendor Specific: Not Supported 00:15:05.993 Reset Timeout: 15000 ms 00:15:05.993 Doorbell Stride: 4 bytes 00:15:05.993 NVM Subsystem Reset: Not Supported 00:15:05.993 Command Sets Supported 00:15:05.993 NVM Command Set: Supported 00:15:05.993 Boot Partition: Not Supported 00:15:05.993 Memory Page Size Minimum: 4096 bytes 00:15:05.993 Memory Page Size Maximum: 4096 bytes 00:15:05.993 Persistent Memory Region: Not Supported 00:15:05.993 Optional Asynchronous Events Supported 00:15:05.993 Namespace Attribute Notices: Not Supported 00:15:05.993 Firmware Activation Notices: Not Supported 00:15:05.993 ANA Change Notices: Not Supported 00:15:05.993 PLE Aggregate Log Change Notices: Not Supported 00:15:05.993 LBA Status Info Alert Notices: Not Supported 00:15:05.993 EGE Aggregate Log Change Notices: Not Supported 00:15:05.993 Normal NVM Subsystem Shutdown event: Not Supported 00:15:05.993 Zone Descriptor Change Notices: Not Supported 00:15:05.993 Discovery Log Change Notices: Supported 00:15:05.993 Controller Attributes 00:15:05.993 128-bit Host Identifier: Not Supported 00:15:05.993 Non-Operational Permissive Mode: Not Supported 00:15:05.993 NVM Sets: Not Supported 00:15:05.993 Read Recovery Levels: Not Supported 00:15:05.993 Endurance Groups: Not Supported 00:15:05.993 Predictable Latency Mode: Not Supported 00:15:05.993 Traffic Based Keep ALive: Not Supported 00:15:05.993 Namespace Granularity: Not Supported 00:15:05.993 SQ Associations: Not Supported 00:15:05.993 UUID List: Not Supported 00:15:05.993 Multi-Domain Subsystem: Not Supported 00:15:05.994 Fixed Capacity Management: Not Supported 00:15:05.994 Variable Capacity Management: Not Supported 00:15:05.994 Delete Endurance Group: Not Supported 00:15:05.994 Delete NVM Set: Not Supported 00:15:05.994 Extended LBA Formats Supported: Not Supported 00:15:05.994 Flexible Data Placement Supported: Not Supported 00:15:05.994 00:15:05.994 Controller Memory Buffer Support 00:15:05.994 ================================ 00:15:05.994 Supported: No 00:15:05.994 00:15:05.994 Persistent Memory Region Support 00:15:05.994 ================================ 00:15:05.994 Supported: No 00:15:05.994 00:15:05.994 Admin Command Set Attributes 00:15:05.994 ============================ 00:15:05.994 Security Send/Receive: Not Supported 00:15:05.994 Format NVM: Not Supported 00:15:05.994 Firmware Activate/Download: Not Supported 00:15:05.994 Namespace Management: Not Supported 00:15:05.994 Device Self-Test: Not Supported 00:15:05.994 Directives: Not Supported 00:15:05.994 NVMe-MI: Not Supported 00:15:05.994 Virtualization Management: Not Supported 00:15:05.994 Doorbell Buffer Config: Not Supported 00:15:05.994 Get LBA Status Capability: Not Supported 00:15:05.994 Command & Feature Lockdown Capability: Not Supported 00:15:05.994 Abort Command Limit: 1 00:15:05.994 Async 
Event Request Limit: 4 00:15:05.994 Number of Firmware Slots: N/A 00:15:05.994 Firmware Slot 1 Read-Only: N/A 00:15:05.994 [2024-07-25 13:57:55.000898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.994 [2024-07-25 13:57:55.000906] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.994 [2024-07-25 13:57:55.000909] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.994 [2024-07-25 13:57:55.000914] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153ef40) on tqpair=0x14fd2c0 00:15:05.994 Firmware Activation Without Reset: N/A 00:15:05.994 Multiple Update Detection Support: N/A 00:15:05.994 Firmware Update Granularity: No Information Provided 00:15:05.994 Per-Namespace SMART Log: No 00:15:05.994 Asymmetric Namespace Access Log Page: Not Supported 00:15:05.994 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:05.994 Command Effects Log Page: Not Supported 00:15:05.994 Get Log Page Extended Data: Supported 00:15:05.994 Telemetry Log Pages: Not Supported 00:15:05.994 Persistent Event Log Pages: Not Supported 00:15:05.994 Supported Log Pages Log Page: May Support 00:15:05.994 Commands Supported & Effects Log Page: Not Supported 00:15:05.994 Feature Identifiers & Effects Log Page:May Support 00:15:05.994 NVMe-MI Commands & Effects Log Page: May Support 00:15:05.994 Data Area 4 for Telemetry Log: Not Supported 00:15:05.994 Error Log Page Entries Supported: 128 00:15:05.994 Keep Alive: Not Supported 00:15:05.994 00:15:05.994 NVM Command Set Attributes 00:15:05.994 ========================== 00:15:05.994 Submission Queue Entry Size 00:15:05.994 Max: 1 00:15:05.994 Min: 1 00:15:05.994 Completion Queue Entry Size 00:15:05.994 Max: 1 00:15:05.994 Min: 1 00:15:05.994 Number of Namespaces: 0 00:15:05.994 Compare Command: Not Supported 00:15:05.994 Write Uncorrectable Command: Not Supported 00:15:05.994 Dataset Management Command: Not Supported 00:15:05.994 Write Zeroes Command: Not Supported 00:15:05.994 Set Features Save Field: Not Supported 00:15:05.994 Reservations: Not Supported 00:15:05.994 Timestamp: Not Supported 00:15:05.994 Copy: Not Supported 00:15:05.994 Volatile Write Cache: Not Present 00:15:05.994 Atomic Write Unit (Normal): 1 00:15:05.994 Atomic Write Unit (PFail): 1 00:15:05.994 Atomic Compare & Write Unit: 1 00:15:05.994 Fused Compare & Write: Supported 00:15:05.994 Scatter-Gather List 00:15:05.994 SGL Command Set: Supported 00:15:05.994 SGL Keyed: Supported 00:15:05.994 SGL Bit Bucket Descriptor: Not Supported 00:15:05.994 SGL Metadata Pointer: Not Supported 00:15:05.994 Oversized SGL: Not Supported 00:15:05.994 SGL Metadata Address: Not Supported 00:15:05.994 SGL Offset: Supported 00:15:05.994 Transport SGL Data Block: Not Supported 00:15:05.994 Replay Protected Memory Block: Not Supported 00:15:05.994 00:15:05.994 Firmware Slot Information 00:15:05.994 ========================= 00:15:05.994 Active slot: 0 00:15:05.994 00:15:05.994 00:15:05.994 Error Log 00:15:05.994 ========= 00:15:05.994 00:15:05.994 Active Namespaces 00:15:05.994 ================= 00:15:05.994 Discovery Log Page 00:15:05.994 ================== 00:15:05.994 Generation Counter: 2 00:15:05.994 Number of Records: 2 00:15:05.994 Record Format: 0 00:15:05.994 00:15:05.994 Discovery Log Entry 0 00:15:05.994 ---------------------- 00:15:05.994 Transport Type: 3 (TCP) 00:15:05.994 Address Family: 1 (IPv4) 00:15:05.994 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:05.994 Entry Flags: 00:15:05.994 Duplicate Returned 
Information: 1 00:15:05.994 Explicit Persistent Connection Support for Discovery: 1 00:15:05.994 Transport Requirements: 00:15:05.994 Secure Channel: Not Required 00:15:05.994 Port ID: 0 (0x0000) 00:15:05.994 Controller ID: 65535 (0xffff) 00:15:05.994 Admin Max SQ Size: 128 00:15:05.994 Transport Service Identifier: 4420 00:15:05.994 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:05.994 Transport Address: 10.0.0.2 00:15:05.994 Discovery Log Entry 1 00:15:05.994 ---------------------- 00:15:05.994 Transport Type: 3 (TCP) 00:15:05.994 Address Family: 1 (IPv4) 00:15:05.994 Subsystem Type: 2 (NVM Subsystem) 00:15:05.994 Entry Flags: 00:15:05.994 Duplicate Returned Information: 0 00:15:05.994 Explicit Persistent Connection Support for Discovery: 0 00:15:05.994 Transport Requirements: 00:15:05.994 Secure Channel: Not Required 00:15:05.994 Port ID: 0 (0x0000) 00:15:05.994 Controller ID: 65535 (0xffff) 00:15:05.994 Admin Max SQ Size: 128 00:15:05.994 Transport Service Identifier: 4420 00:15:05.994 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:05.994 Transport Address: 10.0.0.2 [2024-07-25 13:57:55.001020] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:05.994 [2024-07-25 13:57:55.001035] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153e940) on tqpair=0x14fd2c0 00:15:05.994 [2024-07-25 13:57:55.001043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.994 [2024-07-25 13:57:55.001049] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153eac0) on tqpair=0x14fd2c0 00:15:05.994 [2024-07-25 13:57:55.001054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.994 [2024-07-25 13:57:55.001059] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153ec40) on tqpair=0x14fd2c0 00:15:05.994 [2024-07-25 13:57:55.001064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.994 [2024-07-25 13:57:55.001069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153edc0) on tqpair=0x14fd2c0 00:15:05.994 [2024-07-25 13:57:55.001074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.994 [2024-07-25 13:57:55.001083] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.994 [2024-07-25 13:57:55.001088] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.994 [2024-07-25 13:57:55.001092] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14fd2c0) 00:15:05.994 [2024-07-25 13:57:55.001100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.994 [2024-07-25 13:57:55.001122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153edc0, cid 3, qid 0 00:15:05.994 [2024-07-25 13:57:55.001173] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.994 [2024-07-25 13:57:55.001181] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.994 [2024-07-25 13:57:55.001185] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.994 [2024-07-25 13:57:55.001189] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153edc0) on tqpair=0x14fd2c0 00:15:05.994 [2024-07-25 13:57:55.001211] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.994 [2024-07-25 13:57:55.001216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.994 [2024-07-25 13:57:55.001220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14fd2c0) 00:15:05.994 [2024-07-25 13:57:55.001228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.994 [2024-07-25 13:57:55.001251] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153edc0, cid 3, qid 0 00:15:05.994 [2024-07-25 13:57:55.001338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.994 [2024-07-25 13:57:55.001346] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.994 [2024-07-25 13:57:55.001350] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.994 [2024-07-25 13:57:55.001354] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153edc0) on tqpair=0x14fd2c0 00:15:05.994 [2024-07-25 13:57:55.001360] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:05.995 [2024-07-25 13:57:55.001365] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:05.995 [2024-07-25 13:57:55.001376] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.001381] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.001385] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14fd2c0) 00:15:05.995 [2024-07-25 13:57:55.001393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.995 [2024-07-25 13:57:55.001413] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153edc0, cid 3, qid 0 00:15:05.995 [2024-07-25 13:57:55.001461] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.995 [2024-07-25 13:57:55.001469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.995 [2024-07-25 13:57:55.001472] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.001477] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153edc0) on tqpair=0x14fd2c0 00:15:05.995 [2024-07-25 13:57:55.001488] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.001493] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.001496] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14fd2c0) 00:15:05.995 [2024-07-25 13:57:55.001504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.995 [2024-07-25 13:57:55.001521] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153edc0, cid 3, qid 0 00:15:05.995 [2024-07-25 13:57:55.001566] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.995 [2024-07-25 13:57:55.001573] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.995 [2024-07-25 
13:57:55.001577] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.001581] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153edc0) on tqpair=0x14fd2c0 00:15:05.995 [2024-07-25 13:57:55.001592] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.001596] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.001600] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14fd2c0) 00:15:05.995 [2024-07-25 13:57:55.001607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.995 [2024-07-25 13:57:55.001624] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153edc0, cid 3, qid 0 00:15:05.995 [2024-07-25 13:57:55.001672] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.995 [2024-07-25 13:57:55.001678] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.995 [2024-07-25 13:57:55.001682] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.001686] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153edc0) on tqpair=0x14fd2c0 00:15:05.995 [2024-07-25 13:57:55.001697] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.001701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.001705] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14fd2c0) 00:15:05.995 [2024-07-25 13:57:55.001713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.995 [2024-07-25 13:57:55.001733] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153edc0, cid 3, qid 0 00:15:05.995 [2024-07-25 13:57:55.001778] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.995 [2024-07-25 13:57:55.001785] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.995 [2024-07-25 13:57:55.001789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.001793] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153edc0) on tqpair=0x14fd2c0 00:15:05.995 [2024-07-25 13:57:55.001803] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.001807] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.001811] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14fd2c0) 00:15:05.995 [2024-07-25 13:57:55.001819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.995 [2024-07-25 13:57:55.001835] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153edc0, cid 3, qid 0 00:15:05.995 [2024-07-25 13:57:55.001883] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.995 [2024-07-25 13:57:55.001891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.995 [2024-07-25 13:57:55.001895] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.001899] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153edc0) on 
tqpair=0x14fd2c0 00:15:05.995 [2024-07-25 13:57:55.001909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.001914] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.001918] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14fd2c0) 00:15:05.995 [2024-07-25 13:57:55.001925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.995 [2024-07-25 13:57:55.001942] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153edc0, cid 3, qid 0 00:15:05.995 [2024-07-25 13:57:55.001984] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.995 [2024-07-25 13:57:55.001991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.995 [2024-07-25 13:57:55.001995] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.001999] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153edc0) on tqpair=0x14fd2c0 00:15:05.995 [2024-07-25 13:57:55.002010] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.002014] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.002018] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14fd2c0) 00:15:05.995 [2024-07-25 13:57:55.002025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.995 [2024-07-25 13:57:55.002042] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153edc0, cid 3, qid 0 00:15:05.995 [2024-07-25 13:57:55.002087] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.995 [2024-07-25 13:57:55.002098] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.995 [2024-07-25 13:57:55.002103] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.002107] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153edc0) on tqpair=0x14fd2c0 00:15:05.995 [2024-07-25 13:57:55.002118] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.002122] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.002126] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14fd2c0) 00:15:05.995 [2024-07-25 13:57:55.002134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.995 [2024-07-25 13:57:55.002151] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153edc0, cid 3, qid 0 00:15:05.995 [2024-07-25 13:57:55.002196] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.995 [2024-07-25 13:57:55.002203] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.995 [2024-07-25 13:57:55.002207] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.002211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153edc0) on tqpair=0x14fd2c0 00:15:05.995 [2024-07-25 13:57:55.002222] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.002226] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.002230] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14fd2c0) 00:15:05.995 [2024-07-25 13:57:55.002237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.995 [2024-07-25 13:57:55.002254] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153edc0, cid 3, qid 0 00:15:05.995 [2024-07-25 13:57:55.006329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.995 [2024-07-25 13:57:55.006358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.995 [2024-07-25 13:57:55.006363] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.006369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153edc0) on tqpair=0x14fd2c0 00:15:05.995 [2024-07-25 13:57:55.006389] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.995 [2024-07-25 13:57:55.006394] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.996 [2024-07-25 13:57:55.006398] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14fd2c0) 00:15:05.996 [2024-07-25 13:57:55.006409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.996 [2024-07-25 13:57:55.006438] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153edc0, cid 3, qid 0 00:15:05.996 [2024-07-25 13:57:55.006495] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.996 [2024-07-25 13:57:55.006502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.996 [2024-07-25 13:57:55.006506] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.996 [2024-07-25 13:57:55.006510] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153edc0) on tqpair=0x14fd2c0 00:15:05.996 [2024-07-25 13:57:55.006519] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:15:06.257 00:15:06.257 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:06.257 [2024-07-25 13:57:55.048691] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:15:06.257 [2024-07-25 13:57:55.048754] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74116 ] 00:15:06.257 [2024-07-25 13:57:55.188616] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:15:06.257 [2024-07-25 13:57:55.188709] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:06.257 [2024-07-25 13:57:55.188717] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:06.257 [2024-07-25 13:57:55.188731] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:06.257 [2024-07-25 13:57:55.188742] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:06.257 [2024-07-25 13:57:55.188901] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:15:06.257 [2024-07-25 13:57:55.188952] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1df12c0 0 00:15:06.257 [2024-07-25 13:57:55.201351] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:06.257 [2024-07-25 13:57:55.201384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:06.257 [2024-07-25 13:57:55.201390] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:06.257 [2024-07-25 13:57:55.201394] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:06.257 [2024-07-25 13:57:55.201453] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.257 [2024-07-25 13:57:55.201462] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.257 [2024-07-25 13:57:55.201466] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df12c0) 00:15:06.257 [2024-07-25 13:57:55.201483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:06.257 [2024-07-25 13:57:55.201529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32940, cid 0, qid 0 00:15:06.257 [2024-07-25 13:57:55.209338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.257 [2024-07-25 13:57:55.209391] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.257 [2024-07-25 13:57:55.209398] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.257 [2024-07-25 13:57:55.209404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32940) on tqpair=0x1df12c0 00:15:06.257 [2024-07-25 13:57:55.209421] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:06.257 [2024-07-25 13:57:55.209432] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:15:06.257 [2024-07-25 13:57:55.209439] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:15:06.258 [2024-07-25 13:57:55.209467] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.209473] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.209477] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df12c0) 00:15:06.258 [2024-07-25 13:57:55.209491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.258 [2024-07-25 13:57:55.209529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32940, cid 0, qid 0 00:15:06.258 [2024-07-25 13:57:55.209601] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.258 [2024-07-25 13:57:55.209609] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.258 [2024-07-25 13:57:55.209613] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.209617] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32940) on tqpair=0x1df12c0 00:15:06.258 [2024-07-25 13:57:55.209629] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:15:06.258 [2024-07-25 13:57:55.209637] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:15:06.258 [2024-07-25 13:57:55.209646] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.209650] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.209654] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df12c0) 00:15:06.258 [2024-07-25 13:57:55.209662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.258 [2024-07-25 13:57:55.209681] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32940, cid 0, qid 0 00:15:06.258 [2024-07-25 13:57:55.210060] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.258 [2024-07-25 13:57:55.210075] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.258 [2024-07-25 13:57:55.210080] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.210085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32940) on tqpair=0x1df12c0 00:15:06.258 [2024-07-25 13:57:55.210091] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:15:06.258 [2024-07-25 13:57:55.210101] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:15:06.258 [2024-07-25 13:57:55.210109] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.210114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.210117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df12c0) 00:15:06.258 [2024-07-25 13:57:55.210125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.258 [2024-07-25 13:57:55.210144] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32940, cid 0, qid 0 00:15:06.258 [2024-07-25 13:57:55.210202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.258 [2024-07-25 13:57:55.210209] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.258 [2024-07-25 13:57:55.210213] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.210217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32940) on tqpair=0x1df12c0 00:15:06.258 [2024-07-25 13:57:55.210223] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:06.258 [2024-07-25 13:57:55.210234] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.210239] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.210243] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df12c0) 00:15:06.258 [2024-07-25 13:57:55.210250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.258 [2024-07-25 13:57:55.210267] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32940, cid 0, qid 0 00:15:06.258 [2024-07-25 13:57:55.210357] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.258 [2024-07-25 13:57:55.210366] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.258 [2024-07-25 13:57:55.210370] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.210374] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32940) on tqpair=0x1df12c0 00:15:06.258 [2024-07-25 13:57:55.210379] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:06.258 [2024-07-25 13:57:55.210385] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:06.258 [2024-07-25 13:57:55.210394] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:06.258 [2024-07-25 13:57:55.210500] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:06.258 [2024-07-25 13:57:55.210505] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:06.258 [2024-07-25 13:57:55.210515] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.210519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.210523] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df12c0) 00:15:06.258 [2024-07-25 13:57:55.210531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.258 [2024-07-25 13:57:55.210552] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32940, cid 0, qid 0 00:15:06.258 [2024-07-25 13:57:55.210685] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.258 [2024-07-25 13:57:55.210693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.258 [2024-07-25 13:57:55.210697] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.210701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32940) on tqpair=0x1df12c0 00:15:06.258 [2024-07-25 13:57:55.210706] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:06.258 [2024-07-25 13:57:55.210717] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.210722] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.210726] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df12c0) 00:15:06.258 [2024-07-25 13:57:55.210734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.258 [2024-07-25 13:57:55.210750] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32940, cid 0, qid 0 00:15:06.258 [2024-07-25 13:57:55.210839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.258 [2024-07-25 13:57:55.210846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.258 [2024-07-25 13:57:55.210850] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.210854] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32940) on tqpair=0x1df12c0 00:15:06.258 [2024-07-25 13:57:55.210859] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:06.258 [2024-07-25 13:57:55.210864] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:06.258 [2024-07-25 13:57:55.210873] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:06.258 [2024-07-25 13:57:55.210884] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:06.258 [2024-07-25 13:57:55.210898] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.210903] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df12c0) 00:15:06.258 [2024-07-25 13:57:55.210911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.258 [2024-07-25 13:57:55.210930] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32940, cid 0, qid 0 00:15:06.258 [2024-07-25 13:57:55.211390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:06.258 [2024-07-25 13:57:55.211407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:06.258 [2024-07-25 13:57:55.211412] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.211416] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df12c0): datao=0, datal=4096, cccid=0 00:15:06.258 [2024-07-25 13:57:55.211422] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e32940) on tqpair(0x1df12c0): expected_datao=0, payload_size=4096 00:15:06.258 [2024-07-25 13:57:55.211427] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.211436] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.211441] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:06.258 [2024-07-25 
13:57:55.211451] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.258 [2024-07-25 13:57:55.211457] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.258 [2024-07-25 13:57:55.211461] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.258 [2024-07-25 13:57:55.211465] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32940) on tqpair=0x1df12c0 00:15:06.258 [2024-07-25 13:57:55.211476] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:06.258 [2024-07-25 13:57:55.211482] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:06.259 [2024-07-25 13:57:55.211487] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:06.259 [2024-07-25 13:57:55.211497] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:06.259 [2024-07-25 13:57:55.211503] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:06.259 [2024-07-25 13:57:55.211508] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:06.259 [2024-07-25 13:57:55.211519] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:06.259 [2024-07-25 13:57:55.211527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.211532] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.211536] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df12c0) 00:15:06.259 [2024-07-25 13:57:55.211544] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:06.259 [2024-07-25 13:57:55.211566] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32940, cid 0, qid 0 00:15:06.259 [2024-07-25 13:57:55.211616] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.259 [2024-07-25 13:57:55.211623] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.259 [2024-07-25 13:57:55.211627] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.211631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32940) on tqpair=0x1df12c0 00:15:06.259 [2024-07-25 13:57:55.211639] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.211643] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.211647] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df12c0) 00:15:06.259 [2024-07-25 13:57:55.211654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.259 [2024-07-25 13:57:55.211661] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.211665] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.211669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1df12c0) 00:15:06.259 
[2024-07-25 13:57:55.211675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.259 [2024-07-25 13:57:55.211682] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.211686] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.211690] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1df12c0) 00:15:06.259 [2024-07-25 13:57:55.211696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.259 [2024-07-25 13:57:55.211702] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.211706] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.211710] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df12c0) 00:15:06.259 [2024-07-25 13:57:55.211716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.259 [2024-07-25 13:57:55.211721] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:06.259 [2024-07-25 13:57:55.211730] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:06.259 [2024-07-25 13:57:55.211738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.211742] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df12c0) 00:15:06.259 [2024-07-25 13:57:55.211749] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.259 [2024-07-25 13:57:55.211775] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32940, cid 0, qid 0 00:15:06.259 [2024-07-25 13:57:55.211783] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32ac0, cid 1, qid 0 00:15:06.259 [2024-07-25 13:57:55.211788] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32c40, cid 2, qid 0 00:15:06.259 [2024-07-25 13:57:55.211793] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32dc0, cid 3, qid 0 00:15:06.259 [2024-07-25 13:57:55.211798] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32f40, cid 4, qid 0 00:15:06.259 [2024-07-25 13:57:55.212400] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.259 [2024-07-25 13:57:55.212418] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.259 [2024-07-25 13:57:55.212423] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.212427] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32f40) on tqpair=0x1df12c0 00:15:06.259 [2024-07-25 13:57:55.212433] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:06.259 [2024-07-25 13:57:55.212439] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:06.259 [2024-07-25 13:57:55.212449] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:06.259 [2024-07-25 13:57:55.212456] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:06.259 [2024-07-25 13:57:55.212464] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.212468] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.212472] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df12c0) 00:15:06.259 [2024-07-25 13:57:55.212480] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:06.259 [2024-07-25 13:57:55.212501] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32f40, cid 4, qid 0 00:15:06.259 [2024-07-25 13:57:55.212555] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.259 [2024-07-25 13:57:55.212562] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.259 [2024-07-25 13:57:55.212565] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.212570] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32f40) on tqpair=0x1df12c0 00:15:06.259 [2024-07-25 13:57:55.212642] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:06.259 [2024-07-25 13:57:55.212654] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:06.259 [2024-07-25 13:57:55.212664] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.212668] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df12c0) 00:15:06.259 [2024-07-25 13:57:55.212676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.259 [2024-07-25 13:57:55.212695] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32f40, cid 4, qid 0 00:15:06.259 [2024-07-25 13:57:55.212859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:06.259 [2024-07-25 13:57:55.212866] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:06.259 [2024-07-25 13:57:55.212870] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.212874] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df12c0): datao=0, datal=4096, cccid=4 00:15:06.259 [2024-07-25 13:57:55.212879] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e32f40) on tqpair(0x1df12c0): expected_datao=0, payload_size=4096 00:15:06.259 [2024-07-25 13:57:55.212884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.212892] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.212897] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.213137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.259 [2024-07-25 13:57:55.213159] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:15:06.259 [2024-07-25 13:57:55.213164] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.213169] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32f40) on tqpair=0x1df12c0 00:15:06.259 [2024-07-25 13:57:55.213181] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:06.259 [2024-07-25 13:57:55.213195] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:06.259 [2024-07-25 13:57:55.213206] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:06.259 [2024-07-25 13:57:55.213215] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.213220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df12c0) 00:15:06.259 [2024-07-25 13:57:55.213227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.259 [2024-07-25 13:57:55.213248] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32f40, cid 4, qid 0 00:15:06.259 [2024-07-25 13:57:55.217324] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:06.259 [2024-07-25 13:57:55.217349] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:06.259 [2024-07-25 13:57:55.217354] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.217358] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df12c0): datao=0, datal=4096, cccid=4 00:15:06.259 [2024-07-25 13:57:55.217363] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e32f40) on tqpair(0x1df12c0): expected_datao=0, payload_size=4096 00:15:06.259 [2024-07-25 13:57:55.217369] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.217377] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.217382] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.217388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.259 [2024-07-25 13:57:55.217394] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.259 [2024-07-25 13:57:55.217398] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.217403] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32f40) on tqpair=0x1df12c0 00:15:06.259 [2024-07-25 13:57:55.217426] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:06.259 [2024-07-25 13:57:55.217443] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:06.259 [2024-07-25 13:57:55.217456] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.259 [2024-07-25 13:57:55.217461] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df12c0) 00:15:06.260 [2024-07-25 13:57:55.217470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.260 [2024-07-25 13:57:55.217499] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32f40, cid 4, qid 0 00:15:06.260 [2024-07-25 13:57:55.217586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:06.260 [2024-07-25 13:57:55.217594] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:06.260 [2024-07-25 13:57:55.217598] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.217602] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df12c0): datao=0, datal=4096, cccid=4 00:15:06.260 [2024-07-25 13:57:55.217606] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e32f40) on tqpair(0x1df12c0): expected_datao=0, payload_size=4096 00:15:06.260 [2024-07-25 13:57:55.217611] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.217619] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.217623] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.217631] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.260 [2024-07-25 13:57:55.217638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.260 [2024-07-25 13:57:55.217642] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.217646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32f40) on tqpair=0x1df12c0 00:15:06.260 [2024-07-25 13:57:55.217655] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:06.260 [2024-07-25 13:57:55.217665] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:06.260 [2024-07-25 13:57:55.217677] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:06.260 [2024-07-25 13:57:55.217684] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:06.260 [2024-07-25 13:57:55.217689] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:06.260 [2024-07-25 13:57:55.217695] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:06.260 [2024-07-25 13:57:55.217701] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:06.260 [2024-07-25 13:57:55.217706] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:06.260 [2024-07-25 13:57:55.217712] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:06.260 [2024-07-25 13:57:55.217733] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.217738] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df12c0) 00:15:06.260 [2024-07-25 13:57:55.217746] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.260 [2024-07-25 13:57:55.217754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.217758] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.217762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1df12c0) 00:15:06.260 [2024-07-25 13:57:55.217768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.260 [2024-07-25 13:57:55.217794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32f40, cid 4, qid 0 00:15:06.260 [2024-07-25 13:57:55.217801] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e330c0, cid 5, qid 0 00:15:06.260 [2024-07-25 13:57:55.218364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.260 [2024-07-25 13:57:55.218382] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.260 [2024-07-25 13:57:55.218387] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.218395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32f40) on tqpair=0x1df12c0 00:15:06.260 [2024-07-25 13:57:55.218402] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.260 [2024-07-25 13:57:55.218408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.260 [2024-07-25 13:57:55.218412] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.218416] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e330c0) on tqpair=0x1df12c0 00:15:06.260 [2024-07-25 13:57:55.218428] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.218433] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1df12c0) 00:15:06.260 [2024-07-25 13:57:55.218440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.260 [2024-07-25 13:57:55.218461] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e330c0, cid 5, qid 0 00:15:06.260 [2024-07-25 13:57:55.218514] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.260 [2024-07-25 13:57:55.218521] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.260 [2024-07-25 13:57:55.218525] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.218529] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e330c0) on tqpair=0x1df12c0 00:15:06.260 [2024-07-25 13:57:55.218540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.218545] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1df12c0) 00:15:06.260 [2024-07-25 13:57:55.218552] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.260 [2024-07-25 13:57:55.218569] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e330c0, cid 5, qid 0 00:15:06.260 [2024-07-25 13:57:55.218873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.260 
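Throughout this trace each received PDU is logged twice by the initiator, once by nvme_tcp_pdu_ch_handle as "pdu type = N" and once by nvme_tcp_pdu_psh_handle. The numeric types follow the NVMe/TCP encoding: 4 is a command capsule, 5 a response capsule (handled here by nvme_tcp_capsule_resp_hdr_handle) and 7 controller-to-host data (handled by nvme_tcp_c2h_data_hdr_handle), so every capsule_cmd send above and below is eventually paired with a type 5 completion, with type 7 data PDUs in between for commands such as Identify and Get Log Page that return a payload. If a run like this has been saved to a file, the per-type PDU count can be pulled out with a one-liner; the file name below is made up for illustration.

# Count received PDUs by type; the 'pdu type = N' pattern matches the
# nvme_tcp_pdu_ch_handle record, which fires once per received PDU.
grep -o 'pdu type = [0-9]' nvmf_identify_console.log | sort | uniq -c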
[2024-07-25 13:57:55.218888] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.260 [2024-07-25 13:57:55.218893] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.218897] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e330c0) on tqpair=0x1df12c0 00:15:06.260 [2024-07-25 13:57:55.218908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.218920] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1df12c0) 00:15:06.260 [2024-07-25 13:57:55.218927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.260 [2024-07-25 13:57:55.218945] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e330c0, cid 5, qid 0 00:15:06.260 [2024-07-25 13:57:55.219004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.260 [2024-07-25 13:57:55.219011] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.260 [2024-07-25 13:57:55.219015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.219019] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e330c0) on tqpair=0x1df12c0 00:15:06.260 [2024-07-25 13:57:55.219044] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.219050] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1df12c0) 00:15:06.260 [2024-07-25 13:57:55.219058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.260 [2024-07-25 13:57:55.219066] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.219070] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df12c0) 00:15:06.260 [2024-07-25 13:57:55.219076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.260 [2024-07-25 13:57:55.219085] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.219089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1df12c0) 00:15:06.260 [2024-07-25 13:57:55.219096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.260 [2024-07-25 13:57:55.219104] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.219109] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1df12c0) 00:15:06.260 [2024-07-25 13:57:55.219115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.260 [2024-07-25 13:57:55.219135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e330c0, cid 5, qid 0 00:15:06.260 [2024-07-25 13:57:55.219143] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32f40, cid 4, qid 0 00:15:06.260 [2024-07-25 13:57:55.219147] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e33240, cid 6, qid 0 00:15:06.260 [2024-07-25 13:57:55.219152] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e333c0, cid 7, qid 0 00:15:06.260 [2024-07-25 13:57:55.219803] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:06.260 [2024-07-25 13:57:55.219819] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:06.260 [2024-07-25 13:57:55.219823] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.219828] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df12c0): datao=0, datal=8192, cccid=5 00:15:06.260 [2024-07-25 13:57:55.219833] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e330c0) on tqpair(0x1df12c0): expected_datao=0, payload_size=8192 00:15:06.260 [2024-07-25 13:57:55.219838] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.219857] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.219862] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.219869] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:06.260 [2024-07-25 13:57:55.219875] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:06.260 [2024-07-25 13:57:55.219879] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.219883] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df12c0): datao=0, datal=512, cccid=4 00:15:06.260 [2024-07-25 13:57:55.219888] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e32f40) on tqpair(0x1df12c0): expected_datao=0, payload_size=512 00:15:06.260 [2024-07-25 13:57:55.219892] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.219899] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.219903] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:06.260 [2024-07-25 13:57:55.219909] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:06.260 [2024-07-25 13:57:55.219915] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:06.260 [2024-07-25 13:57:55.219918] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:06.261 [2024-07-25 13:57:55.219922] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df12c0): datao=0, datal=512, cccid=6 00:15:06.261 [2024-07-25 13:57:55.219927] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e33240) on tqpair(0x1df12c0): expected_datao=0, payload_size=512 00:15:06.261 [2024-07-25 13:57:55.219931] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.261 [2024-07-25 13:57:55.219938] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:06.261 [2024-07-25 13:57:55.219942] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:06.261 [2024-07-25 13:57:55.219947] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:06.261 [2024-07-25 13:57:55.219953] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:06.261 [2024-07-25 13:57:55.219957] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:06.261 [2024-07-25 13:57:55.219961] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x1df12c0): datao=0, datal=4096, cccid=7 00:15:06.261 [2024-07-25 13:57:55.219966] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e333c0) on tqpair(0x1df12c0): expected_datao=0, payload_size=4096 00:15:06.261 [2024-07-25 13:57:55.219970] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.261 [2024-07-25 13:57:55.219977] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:06.261 [2024-07-25 13:57:55.219981] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:06.261 [2024-07-25 13:57:55.219987] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.261 [2024-07-25 13:57:55.219993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.261 [2024-07-25 13:57:55.219997] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.261 [2024-07-25 13:57:55.220001] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e330c0) on tqpair=0x1df12c0 00:15:06.261 [2024-07-25 13:57:55.220022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.261 [2024-07-25 13:57:55.220029] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.261 ===================================================== 00:15:06.261 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:06.261 ===================================================== 00:15:06.261 Controller Capabilities/Features 00:15:06.261 ================================ 00:15:06.261 Vendor ID: 8086 00:15:06.261 Subsystem Vendor ID: 8086 00:15:06.261 Serial Number: SPDK00000000000001 00:15:06.261 Model Number: SPDK bdev Controller 00:15:06.261 Firmware Version: 24.09 00:15:06.261 Recommended Arb Burst: 6 00:15:06.261 IEEE OUI Identifier: e4 d2 5c 00:15:06.261 Multi-path I/O 00:15:06.261 May have multiple subsystem ports: Yes 00:15:06.261 May have multiple controllers: Yes 00:15:06.261 Associated with SR-IOV VF: No 00:15:06.261 Max Data Transfer Size: 131072 00:15:06.261 Max Number of Namespaces: 32 00:15:06.261 Max Number of I/O Queues: 127 00:15:06.261 NVMe Specification Version (VS): 1.3 00:15:06.261 NVMe Specification Version (Identify): 1.3 00:15:06.261 Maximum Queue Entries: 128 00:15:06.261 Contiguous Queues Required: Yes 00:15:06.261 Arbitration Mechanisms Supported 00:15:06.261 Weighted Round Robin: Not Supported 00:15:06.261 Vendor Specific: Not Supported 00:15:06.261 Reset Timeout: 15000 ms 00:15:06.261 Doorbell Stride: 4 bytes 00:15:06.261 NVM Subsystem Reset: Not Supported 00:15:06.261 Command Sets Supported 00:15:06.261 NVM Command Set: Supported 00:15:06.261 Boot Partition: Not Supported 00:15:06.261 Memory Page Size Minimum: 4096 bytes 00:15:06.261 Memory Page Size Maximum: 4096 bytes 00:15:06.261 Persistent Memory Region: Not Supported 00:15:06.261 Optional Asynchronous Events Supported 00:15:06.261 Namespace Attribute Notices: Supported 00:15:06.261 Firmware Activation Notices: Not Supported 00:15:06.261 ANA Change Notices: Not Supported 00:15:06.261 PLE Aggregate Log Change Notices: Not Supported 00:15:06.261 LBA Status Info Alert Notices: Not Supported 00:15:06.261 EGE Aggregate Log Change Notices: Not Supported 00:15:06.261 Normal NVM Subsystem Shutdown event: Not Supported 00:15:06.261 Zone Descriptor Change Notices: Not Supported 00:15:06.261 Discovery Log Change Notices: Not Supported 00:15:06.261 Controller Attributes 00:15:06.261 128-bit Host Identifier: Supported 00:15:06.261 Non-Operational Permissive Mode: Not Supported 00:15:06.261 
NVM Sets: Not Supported 00:15:06.261 Read Recovery Levels: Not Supported 00:15:06.261 Endurance Groups: Not Supported 00:15:06.261 Predictable Latency Mode: Not Supported 00:15:06.261 Traffic Based Keep ALive: Not Supported 00:15:06.261 Namespace Granularity: Not Supported 00:15:06.261 SQ Associations: Not Supported 00:15:06.261 UUID List: Not Supported 00:15:06.261 Multi-Domain Subsystem: Not Supported 00:15:06.261 Fixed Capacity Management: Not Supported 00:15:06.261 Variable Capacity Management: Not Supported 00:15:06.261 Delete Endurance Group: Not Supported 00:15:06.261 Delete NVM Set: Not Supported 00:15:06.261 Extended LBA Formats Supported: Not Supported 00:15:06.261 Flexible Data Placement Supported: Not Supported 00:15:06.261 00:15:06.261 Controller Memory Buffer Support 00:15:06.261 ================================ 00:15:06.261 Supported: No 00:15:06.261 00:15:06.261 Persistent Memory Region Support 00:15:06.261 ================================ 00:15:06.261 Supported: No 00:15:06.261 00:15:06.261 Admin Command Set Attributes 00:15:06.261 ============================ 00:15:06.261 Security Send/Receive: Not Supported 00:15:06.261 Format NVM: Not Supported 00:15:06.261 Firmware Activate/Download: Not Supported 00:15:06.261 Namespace Management: Not Supported 00:15:06.261 Device Self-Test: Not Supported 00:15:06.261 Directives: Not Supported 00:15:06.261 NVMe-MI: Not Supported 00:15:06.261 Virtualization Management: Not Supported 00:15:06.261 Doorbell Buffer Config: Not Supported 00:15:06.261 Get LBA Status Capability: Not Supported 00:15:06.261 Command & Feature Lockdown Capability: Not Supported 00:15:06.261 Abort Command Limit: 4 00:15:06.261 Async Event Request Limit: 4 00:15:06.261 Number of Firmware Slots: N/A 00:15:06.261 Firmware Slot 1 Read-Only: N/A 00:15:06.261 Firmware Activation Without Reset: [2024-07-25 13:57:55.220033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.261 [2024-07-25 13:57:55.220037] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32f40) on tqpair=0x1df12c0 00:15:06.261 [2024-07-25 13:57:55.220051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.261 [2024-07-25 13:57:55.220057] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.261 [2024-07-25 13:57:55.220061] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.261 [2024-07-25 13:57:55.220065] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e33240) on tqpair=0x1df12c0 00:15:06.261 [2024-07-25 13:57:55.220073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.261 [2024-07-25 13:57:55.220079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.261 [2024-07-25 13:57:55.220083] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.261 [2024-07-25 13:57:55.220097] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e333c0) on tqpair=0x1df12c0 00:15:06.261 N/A 00:15:06.261 Multiple Update Detection Support: N/A 00:15:06.261 Firmware Update Granularity: No Information Provided 00:15:06.261 Per-Namespace SMART Log: No 00:15:06.261 Asymmetric Namespace Access Log Page: Not Supported 00:15:06.261 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:06.261 Command Effects Log Page: Supported 00:15:06.261 Get Log Page Extended Data: Supported 00:15:06.261 Telemetry Log Pages: Not Supported 00:15:06.261 Persistent Event Log Pages: Not Supported 00:15:06.261 Supported Log Pages Log Page: May Support 
00:15:06.261 Commands Supported & Effects Log Page: Not Supported 00:15:06.261 Feature Identifiers & Effects Log Page:May Support 00:15:06.261 NVMe-MI Commands & Effects Log Page: May Support 00:15:06.261 Data Area 4 for Telemetry Log: Not Supported 00:15:06.261 Error Log Page Entries Supported: 128 00:15:06.261 Keep Alive: Supported 00:15:06.261 Keep Alive Granularity: 10000 ms 00:15:06.261 00:15:06.261 NVM Command Set Attributes 00:15:06.261 ========================== 00:15:06.261 Submission Queue Entry Size 00:15:06.261 Max: 64 00:15:06.261 Min: 64 00:15:06.261 Completion Queue Entry Size 00:15:06.261 Max: 16 00:15:06.261 Min: 16 00:15:06.261 Number of Namespaces: 32 00:15:06.261 Compare Command: Supported 00:15:06.261 Write Uncorrectable Command: Not Supported 00:15:06.261 Dataset Management Command: Supported 00:15:06.261 Write Zeroes Command: Supported 00:15:06.261 Set Features Save Field: Not Supported 00:15:06.261 Reservations: Supported 00:15:06.261 Timestamp: Not Supported 00:15:06.261 Copy: Supported 00:15:06.261 Volatile Write Cache: Present 00:15:06.261 Atomic Write Unit (Normal): 1 00:15:06.261 Atomic Write Unit (PFail): 1 00:15:06.261 Atomic Compare & Write Unit: 1 00:15:06.261 Fused Compare & Write: Supported 00:15:06.261 Scatter-Gather List 00:15:06.261 SGL Command Set: Supported 00:15:06.261 SGL Keyed: Supported 00:15:06.261 SGL Bit Bucket Descriptor: Not Supported 00:15:06.261 SGL Metadata Pointer: Not Supported 00:15:06.261 Oversized SGL: Not Supported 00:15:06.261 SGL Metadata Address: Not Supported 00:15:06.261 SGL Offset: Supported 00:15:06.261 Transport SGL Data Block: Not Supported 00:15:06.261 Replay Protected Memory Block: Not Supported 00:15:06.262 00:15:06.262 Firmware Slot Information 00:15:06.262 ========================= 00:15:06.262 Active slot: 1 00:15:06.262 Slot 1 Firmware Revision: 24.09 00:15:06.262 00:15:06.262 00:15:06.262 Commands Supported and Effects 00:15:06.262 ============================== 00:15:06.262 Admin Commands 00:15:06.262 -------------- 00:15:06.262 Get Log Page (02h): Supported 00:15:06.262 Identify (06h): Supported 00:15:06.262 Abort (08h): Supported 00:15:06.262 Set Features (09h): Supported 00:15:06.262 Get Features (0Ah): Supported 00:15:06.262 Asynchronous Event Request (0Ch): Supported 00:15:06.262 Keep Alive (18h): Supported 00:15:06.262 I/O Commands 00:15:06.262 ------------ 00:15:06.262 Flush (00h): Supported LBA-Change 00:15:06.262 Write (01h): Supported LBA-Change 00:15:06.262 Read (02h): Supported 00:15:06.262 Compare (05h): Supported 00:15:06.262 Write Zeroes (08h): Supported LBA-Change 00:15:06.262 Dataset Management (09h): Supported LBA-Change 00:15:06.262 Copy (19h): Supported LBA-Change 00:15:06.262 00:15:06.262 Error Log 00:15:06.262 ========= 00:15:06.262 00:15:06.262 Arbitration 00:15:06.262 =========== 00:15:06.262 Arbitration Burst: 1 00:15:06.262 00:15:06.262 Power Management 00:15:06.262 ================ 00:15:06.262 Number of Power States: 1 00:15:06.262 Current Power State: Power State #0 00:15:06.262 Power State #0: 00:15:06.262 Max Power: 0.00 W 00:15:06.262 Non-Operational State: Operational 00:15:06.262 Entry Latency: Not Reported 00:15:06.262 Exit Latency: Not Reported 00:15:06.262 Relative Read Throughput: 0 00:15:06.262 Relative Read Latency: 0 00:15:06.262 Relative Write Throughput: 0 00:15:06.262 Relative Write Latency: 0 00:15:06.262 Idle Power: Not Reported 00:15:06.262 Active Power: Not Reported 00:15:06.262 Non-Operational Permissive Mode: Not Supported 00:15:06.262 00:15:06.262 Health 
Information 00:15:06.262 ================== 00:15:06.262 Critical Warnings: 00:15:06.262 Available Spare Space: OK 00:15:06.262 Temperature: OK 00:15:06.262 Device Reliability: OK 00:15:06.262 Read Only: No 00:15:06.262 Volatile Memory Backup: OK 00:15:06.262 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:06.262 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:06.262 Available Spare: 0% 00:15:06.262 Available Spare Threshold: 0% 00:15:06.262 Life Percentage Used:[2024-07-25 13:57:55.220213] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.262 [2024-07-25 13:57:55.220220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1df12c0) 00:15:06.262 [2024-07-25 13:57:55.220229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.262 [2024-07-25 13:57:55.220253] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e333c0, cid 7, qid 0 00:15:06.262 [2024-07-25 13:57:55.220487] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.262 [2024-07-25 13:57:55.220497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.262 [2024-07-25 13:57:55.220501] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.262 [2024-07-25 13:57:55.220505] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e333c0) on tqpair=0x1df12c0 00:15:06.262 [2024-07-25 13:57:55.220551] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:06.262 [2024-07-25 13:57:55.220564] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32940) on tqpair=0x1df12c0 00:15:06.262 [2024-07-25 13:57:55.220572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.262 [2024-07-25 13:57:55.220578] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32ac0) on tqpair=0x1df12c0 00:15:06.262 [2024-07-25 13:57:55.220583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.262 [2024-07-25 13:57:55.220588] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32c40) on tqpair=0x1df12c0 00:15:06.262 [2024-07-25 13:57:55.220593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.262 [2024-07-25 13:57:55.220599] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32dc0) on tqpair=0x1df12c0 00:15:06.262 [2024-07-25 13:57:55.220604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.262 [2024-07-25 13:57:55.220615] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.262 [2024-07-25 13:57:55.220619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.262 [2024-07-25 13:57:55.220623] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df12c0) 00:15:06.262 [2024-07-25 13:57:55.220637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.262 [2024-07-25 13:57:55.220660] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32dc0, cid 3, qid 0 00:15:06.262 [2024-07-25 
13:57:55.221107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.262 [2024-07-25 13:57:55.221123] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.262 [2024-07-25 13:57:55.221128] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.262 [2024-07-25 13:57:55.221132] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32dc0) on tqpair=0x1df12c0 00:15:06.262 [2024-07-25 13:57:55.221141] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.262 [2024-07-25 13:57:55.221145] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.262 [2024-07-25 13:57:55.221149] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df12c0) 00:15:06.262 [2024-07-25 13:57:55.221157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.262 [2024-07-25 13:57:55.221181] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32dc0, cid 3, qid 0 00:15:06.262 [2024-07-25 13:57:55.221260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.262 [2024-07-25 13:57:55.221267] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.262 [2024-07-25 13:57:55.221271] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.262 [2024-07-25 13:57:55.221275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32dc0) on tqpair=0x1df12c0 00:15:06.262 [2024-07-25 13:57:55.221281] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:06.262 [2024-07-25 13:57:55.221286] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:06.262 [2024-07-25 13:57:55.221296] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:06.262 [2024-07-25 13:57:55.225328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:06.262 [2024-07-25 13:57:55.225337] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df12c0) 00:15:06.262 [2024-07-25 13:57:55.225350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.262 [2024-07-25 13:57:55.225381] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e32dc0, cid 3, qid 0 00:15:06.262 [2024-07-25 13:57:55.225448] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:06.262 [2024-07-25 13:57:55.225456] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:06.262 [2024-07-25 13:57:55.225460] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:06.262 [2024-07-25 13:57:55.225465] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e32dc0) on tqpair=0x1df12c0 00:15:06.262 [2024-07-25 13:57:55.225476] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:15:06.262 0% 00:15:06.262 Data Units Read: 0 00:15:06.262 Data Units Written: 0 00:15:06.262 Host Read Commands: 0 00:15:06.262 Host Write Commands: 0 00:15:06.262 Controller Busy Time: 0 minutes 00:15:06.262 Power Cycles: 0 00:15:06.262 Power On Hours: 0 hours 00:15:06.262 Unsafe Shutdowns: 0 00:15:06.262 Unrecoverable Media Errors: 0 00:15:06.262 Lifetime Error Log Entries: 0 00:15:06.262 Warning Temperature 
Time: 0 minutes 00:15:06.262 Critical Temperature Time: 0 minutes 00:15:06.262 00:15:06.262 Number of Queues 00:15:06.263 ================ 00:15:06.263 Number of I/O Submission Queues: 127 00:15:06.263 Number of I/O Completion Queues: 127 00:15:06.263 00:15:06.263 Active Namespaces 00:15:06.263 ================= 00:15:06.263 Namespace ID:1 00:15:06.263 Error Recovery Timeout: Unlimited 00:15:06.263 Command Set Identifier: NVM (00h) 00:15:06.263 Deallocate: Supported 00:15:06.263 Deallocated/Unwritten Error: Not Supported 00:15:06.263 Deallocated Read Value: Unknown 00:15:06.263 Deallocate in Write Zeroes: Not Supported 00:15:06.263 Deallocated Guard Field: 0xFFFF 00:15:06.263 Flush: Supported 00:15:06.263 Reservation: Supported 00:15:06.263 Namespace Sharing Capabilities: Multiple Controllers 00:15:06.263 Size (in LBAs): 131072 (0GiB) 00:15:06.263 Capacity (in LBAs): 131072 (0GiB) 00:15:06.263 Utilization (in LBAs): 131072 (0GiB) 00:15:06.263 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:06.263 EUI64: ABCDEF0123456789 00:15:06.263 UUID: c2237bdd-c7b8-4199-8583-f8fc36bd28cf 00:15:06.263 Thin Provisioning: Not Supported 00:15:06.263 Per-NS Atomic Units: Yes 00:15:06.263 Atomic Boundary Size (Normal): 0 00:15:06.263 Atomic Boundary Size (PFail): 0 00:15:06.263 Atomic Boundary Offset: 0 00:15:06.263 Maximum Single Source Range Length: 65535 00:15:06.263 Maximum Copy Length: 65535 00:15:06.263 Maximum Source Range Count: 1 00:15:06.263 NGUID/EUI64 Never Reused: No 00:15:06.263 Namespace Write Protected: No 00:15:06.263 Number of LBA Formats: 1 00:15:06.263 Current LBA Format: LBA Format #00 00:15:06.263 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:06.263 00:15:06.263 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:06.521 rmmod nvme_tcp 00:15:06.521 rmmod nvme_fabrics 00:15:06.521 rmmod nvme_keyring 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 74073 ']' 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@490 -- # killprocess 74073 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 74073 ']' 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 74073 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74073 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:06.521 killing process with pid 74073 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74073' 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 74073 00:15:06.521 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 74073 00:15:06.780 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:06.780 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:06.780 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:06.780 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:06.780 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:06.780 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.780 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:06.780 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.780 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:06.780 00:15:06.780 real 0m2.543s 00:15:06.780 user 0m6.907s 00:15:06.780 sys 0m0.647s 00:15:06.780 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:06.780 13:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:06.780 ************************************ 00:15:06.780 END TEST nvmf_identify 00:15:06.780 ************************************ 00:15:06.780 13:57:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:06.780 13:57:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:06.780 13:57:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:06.780 13:57:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:06.780 ************************************ 00:15:06.780 START TEST nvmf_perf 00:15:06.781 ************************************ 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:06.781 * Looking for test storage... 
00:15:06.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.781 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.041 13:57:55 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:07.041 Cannot find device "nvmf_tgt_br" 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # true 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:07.041 Cannot find device "nvmf_tgt_br2" 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # true 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:07.041 Cannot find device "nvmf_tgt_br" 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # true 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # 
ip link set nvmf_tgt_br2 down 00:15:07.041 Cannot find device "nvmf_tgt_br2" 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # true 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:07.041 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:07.041 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:07.041 13:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:07.041 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:07.041 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:07.041 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:07.041 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:07.041 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:07.041 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 
master nvmf_br 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:07.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:07.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:15:07.300 00:15:07.300 --- 10.0.0.2 ping statistics --- 00:15:07.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.300 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:07.300 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:07.300 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:15:07.300 00:15:07.300 --- 10.0.0.3 ping statistics --- 00:15:07.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.300 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:07.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:07.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:15:07.300 00:15:07.300 --- 10.0.0.1 ping statistics --- 00:15:07.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.300 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=74281 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 74281 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 74281 ']' 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.300 13:57:56 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:07.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:07.300 13:57:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:07.300 [2024-07-25 13:57:56.239582] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:15:07.300 [2024-07-25 13:57:56.239677] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.566 [2024-07-25 13:57:56.375688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:07.566 [2024-07-25 13:57:56.494878] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:07.566 [2024-07-25 13:57:56.494944] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:07.566 [2024-07-25 13:57:56.494956] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:07.566 [2024-07-25 13:57:56.494965] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:07.566 [2024-07-25 13:57:56.494973] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:07.566 [2024-07-25 13:57:56.495102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.566 [2024-07-25 13:57:56.495255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:07.566 [2024-07-25 13:57:56.495342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:07.566 [2024-07-25 13:57:56.495347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.566 [2024-07-25 13:57:56.550113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:08.500 13:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:08.500 13:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:15:08.500 13:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:08.500 13:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:08.500 13:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:08.500 13:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.500 13:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:08.500 13:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:08.757 13:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:08.757 13:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:09.015 13:57:58 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:09.273 13:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:09.531 13:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:09.531 13:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:15:09.531 13:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:09.531 13:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:09.531 13:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:09.789 [2024-07-25 13:57:58.604337] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.789 13:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:10.047 13:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:10.047 13:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:10.305 13:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:10.305 13:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:10.563 13:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.563 [2024-07-25 13:57:59.589518] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.822 13:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:10.822 13:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:10.822 13:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:10.822 13:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:10.822 13:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:12.198 Initializing NVMe Controllers 00:15:12.198 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:12.198 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:12.198 Initialization complete. Launching workers. 
00:15:12.198 ======================================================== 00:15:12.198 Latency(us) 00:15:12.198 Device Information : IOPS MiB/s Average min max 00:15:12.198 PCIE (0000:00:10.0) NSID 1 from core 0: 24190.74 94.50 1325.75 359.76 7633.15 00:15:12.198 ======================================================== 00:15:12.198 Total : 24190.74 94.50 1325.75 359.76 7633.15 00:15:12.198 00:15:12.198 13:58:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:13.574 Initializing NVMe Controllers 00:15:13.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:13.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:13.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:13.574 Initialization complete. Launching workers. 00:15:13.574 ======================================================== 00:15:13.574 Latency(us) 00:15:13.574 Device Information : IOPS MiB/s Average min max 00:15:13.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3590.00 14.02 278.21 107.01 4262.62 00:15:13.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8127.80 7942.14 12028.01 00:15:13.574 ======================================================== 00:15:13.574 Total : 3714.00 14.51 540.29 107.01 12028.01 00:15:13.574 00:15:13.574 13:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:14.951 Initializing NVMe Controllers 00:15:14.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:14.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:14.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:14.951 Initialization complete. Launching workers. 00:15:14.951 ======================================================== 00:15:14.951 Latency(us) 00:15:14.951 Device Information : IOPS MiB/s Average min max 00:15:14.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8650.19 33.79 3699.60 834.17 11093.90 00:15:14.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3982.28 15.56 8086.63 6609.96 15201.04 00:15:14.951 ======================================================== 00:15:14.951 Total : 12632.47 49.35 5082.58 834.17 15201.04 00:15:14.951 00:15:14.951 13:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:14.951 13:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:17.482 Initializing NVMe Controllers 00:15:17.482 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:17.482 Controller IO queue size 128, less than required. 00:15:17.482 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:17.482 Controller IO queue size 128, less than required. 
00:15:17.482 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:17.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:17.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:17.482 Initialization complete. Launching workers. 00:15:17.482 ======================================================== 00:15:17.482 Latency(us) 00:15:17.482 Device Information : IOPS MiB/s Average min max 00:15:17.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1596.50 399.12 81521.37 41974.05 129666.06 00:15:17.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 639.00 159.75 213260.08 55650.52 332564.47 00:15:17.482 ======================================================== 00:15:17.482 Total : 2235.50 558.88 119177.84 41974.05 332564.47 00:15:17.482 00:15:17.482 13:58:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:17.482 Initializing NVMe Controllers 00:15:17.482 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:17.482 Controller IO queue size 128, less than required. 00:15:17.482 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:17.482 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:17.482 Controller IO queue size 128, less than required. 00:15:17.482 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:17.482 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:17.482 WARNING: Some requested NVMe devices were skipped 00:15:17.482 No valid NVMe controllers or AIO or URING devices found 00:15:17.482 13:58:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:20.011 Initializing NVMe Controllers 00:15:20.011 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:20.011 Controller IO queue size 128, less than required. 00:15:20.011 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:20.011 Controller IO queue size 128, less than required. 00:15:20.011 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:20.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:20.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:20.011 Initialization complete. Launching workers. 
00:15:20.011 00:15:20.011 ==================== 00:15:20.011 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:20.011 TCP transport: 00:15:20.011 polls: 9216 00:15:20.011 idle_polls: 5045 00:15:20.011 sock_completions: 4171 00:15:20.011 nvme_completions: 6371 00:15:20.011 submitted_requests: 9578 00:15:20.011 queued_requests: 1 00:15:20.011 00:15:20.011 ==================== 00:15:20.011 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:20.011 TCP transport: 00:15:20.011 polls: 11647 00:15:20.011 idle_polls: 7672 00:15:20.011 sock_completions: 3975 00:15:20.011 nvme_completions: 6169 00:15:20.011 submitted_requests: 9216 00:15:20.011 queued_requests: 1 00:15:20.011 ======================================================== 00:15:20.011 Latency(us) 00:15:20.011 Device Information : IOPS MiB/s Average min max 00:15:20.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1588.26 397.07 82446.21 39795.51 154948.98 00:15:20.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1537.90 384.47 84513.85 35732.62 135154.39 00:15:20.011 ======================================================== 00:15:20.011 Total : 3126.16 781.54 83463.37 35732.62 154948.98 00:15:20.011 00:15:20.269 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:20.269 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:20.537 rmmod nvme_tcp 00:15:20.537 rmmod nvme_fabrics 00:15:20.537 rmmod nvme_keyring 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 74281 ']' 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 74281 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 74281 ']' 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 74281 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74281 00:15:20.537 killing process with pid 74281 00:15:20.537 13:58:09 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74281' 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 74281 00:15:20.537 13:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 74281 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:21.478 00:15:21.478 real 0m14.543s 00:15:21.478 user 0m53.092s 00:15:21.478 sys 0m4.141s 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:21.478 ************************************ 00:15:21.478 END TEST nvmf_perf 00:15:21.478 ************************************ 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.478 ************************************ 00:15:21.478 START TEST nvmf_fio_host 00:15:21.478 ************************************ 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:21.478 * Looking for test storage... 
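For reference, the nvmf_perf run that just finished boils down to the target-side RPC sequence and the initiator perf invocation below. This is a condensed sketch assembled from the xtrace lines above (RPC commands, NQN, addresses and binary paths are the ones printed in the log; gen_nvme.sh/load_subsystem_config had already attached the local NVMe device as Nvme0n1):

  # target side, issued against the nvmf_tgt running in the nvmf_tgt_ns_spdk namespace
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py bdev_malloc_create 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # initiator side, first NVMe/TCP pass: queue depth 1, 4 KiB, 50/50 random read/write, 1 second
  build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The later passes in the trace only vary queue depth, I/O size (-o/-O) and duration, and the final one adds --transport-stat to dump the per-namespace TCP poll and completion counters shown above.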
00:15:21.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.478 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:21.479 13:58:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 
-- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:21.479 Cannot find device "nvmf_tgt_br" 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:21.479 Cannot find device "nvmf_tgt_br2" 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:21.479 
Cannot find device "nvmf_tgt_br" 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:21.479 Cannot find device "nvmf_tgt_br2" 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:15:21.479 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:21.740 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:21.740 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:21.740 13:58:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:21.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:21.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:15:21.740 00:15:21.740 --- 10.0.0.2 ping statistics --- 00:15:21.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.740 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:21.740 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:21.740 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:15:21.740 00:15:21.740 --- 10.0.0.3 ping statistics --- 00:15:21.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.740 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:21.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:21.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:15:21.740 00:15:21.740 --- 10.0.0.1 ping statistics --- 00:15:21.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.740 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:21.740 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:22.003 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:22.003 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:22.003 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:22.003 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:22.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
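The nvmf_veth_init sequence traced above builds the same topology for every test in this suite: the initiator stays in the root namespace at 10.0.0.1, the target's two interfaces (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, and all root-side veth legs are enslaved to the nvmf_br bridge. Condensed from the commands in the log:

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per endpoint; the *_br legs stay in the root namespace and get bridged
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$link" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # open NVMe/TCP port 4420 towards the initiator leg and let traffic hairpin on the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3              # root namespace -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator

The "Cannot find device" / "Cannot open network namespace" messages earlier in the trace come from tearing down a previous topology that does not exist yet on a fresh run; they are expected and ignored (the trace shows the follow-up 'true' on the same nvmf/common.sh lines).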
00:15:22.003 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74688 00:15:22.003 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:22.003 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:22.003 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74688 00:15:22.003 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 74688 ']' 00:15:22.003 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.003 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:22.003 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.003 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:22.003 13:58:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:22.003 [2024-07-25 13:58:10.828105] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:15:22.003 [2024-07-25 13:58:10.828555] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.003 [2024-07-25 13:58:10.972366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:22.265 [2024-07-25 13:58:11.104795] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.266 [2024-07-25 13:58:11.105148] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.266 [2024-07-25 13:58:11.105354] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.266 [2024-07-25 13:58:11.105592] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.266 [2024-07-25 13:58:11.105608] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
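In other words, the target application is launched inside the namespace with four reactors (-m 0xF) and all tracepoint groups enabled (-e 0xFFFF), and the harness then parks on the RPC socket before configuring anything. Roughly (binary path and flags exactly as traced above; waitforlisten is the autotest_common.sh helper):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock accepts RPC connections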
00:15:22.266 [2024-07-25 13:58:11.105708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.266 [2024-07-25 13:58:11.106245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.266 [2024-07-25 13:58:11.106402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:22.266 [2024-07-25 13:58:11.106411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.266 [2024-07-25 13:58:11.163854] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:22.846 13:58:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:22.846 13:58:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:15:22.846 13:58:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:23.113 [2024-07-25 13:58:12.019201] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.114 13:58:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:23.114 13:58:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:23.114 13:58:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:23.114 13:58:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:23.382 Malloc1 00:15:23.382 13:58:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:23.643 13:58:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:23.902 13:58:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.160 [2024-07-25 13:58:13.162896] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.160 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:24.426 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:24.426 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:24.426 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:24.426 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:24.426 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:24.426 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:24.426 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:24.426 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:24.426 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:24.426 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:24.426 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:24.426 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:24.426 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:24.426 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:24.426 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:24.426 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:24.703 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:24.703 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:24.703 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:24.703 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:24.703 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:24.703 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:24.703 13:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:24.703 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:24.703 fio-3.35 00:15:24.703 Starting 1 thread 00:15:27.235 00:15:27.235 test: (groupid=0, jobs=1): err= 0: pid=74766: Thu Jul 25 13:58:15 2024 00:15:27.235 read: IOPS=8876, BW=34.7MiB/s (36.4MB/s)(69.6MiB/2007msec) 00:15:27.235 slat (usec): min=2, max=218, avg= 2.64, stdev= 2.25 00:15:27.235 clat (usec): min=1584, max=13333, avg=7492.27, stdev=511.42 00:15:27.235 lat (usec): min=1620, max=13336, avg=7494.91, stdev=511.20 00:15:27.235 clat percentiles (usec): 00:15:27.235 | 1.00th=[ 6456], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7111], 00:15:27.235 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7570], 00:15:27.235 | 70.00th=[ 7701], 80.00th=[ 7832], 90.00th=[ 8029], 95.00th=[ 8225], 00:15:27.235 | 99.00th=[ 8586], 99.50th=[ 8979], 99.90th=[11338], 99.95th=[12387], 00:15:27.235 | 99.99th=[13304] 00:15:27.235 bw ( KiB/s): min=34552, max=35968, per=99.96%, avg=35490.00, stdev=640.98, samples=4 00:15:27.235 iops : min= 8638, max= 8992, avg=8872.50, stdev=160.24, samples=4 00:15:27.235 write: IOPS=8887, BW=34.7MiB/s (36.4MB/s)(69.7MiB/2007msec); 0 zone resets 00:15:27.235 slat (usec): min=2, max=150, avg= 2.75, stdev= 1.57 00:15:27.235 clat (usec): min=1488, max=13445, avg=6857.66, stdev=483.42 00:15:27.235 lat (usec): min=1496, max=13448, avg=6860.41, stdev=483.39 00:15:27.235 clat percentiles 
(usec): 00:15:27.235 | 1.00th=[ 5932], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6521], 00:15:27.235 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6849], 60.00th=[ 6980], 00:15:27.235 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7308], 95.00th=[ 7504], 00:15:27.235 | 99.00th=[ 7832], 99.50th=[ 8225], 99.90th=[11338], 99.95th=[12387], 00:15:27.235 | 99.99th=[13435] 00:15:27.235 bw ( KiB/s): min=35224, max=35944, per=100.00%, avg=35560.00, stdev=337.39, samples=4 00:15:27.235 iops : min= 8806, max= 8986, avg=8890.00, stdev=84.35, samples=4 00:15:27.235 lat (msec) : 2=0.04%, 4=0.12%, 10=99.67%, 20=0.17% 00:15:27.235 cpu : usr=66.05%, sys=25.37%, ctx=6, majf=0, minf=7 00:15:27.235 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:27.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:27.235 issued rwts: total=17815,17837,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.235 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:27.235 00:15:27.235 Run status group 0 (all jobs): 00:15:27.235 READ: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.6MiB (73.0MB), run=2007-2007msec 00:15:27.235 WRITE: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.7MiB (73.1MB), run=2007-2007msec 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 
-- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:27.235 13:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:27.235 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:27.235 fio-3.35 00:15:27.235 Starting 1 thread 00:15:29.765 00:15:29.765 test: (groupid=0, jobs=1): err= 0: pid=74809: Thu Jul 25 13:58:18 2024 00:15:29.765 read: IOPS=7758, BW=121MiB/s (127MB/s)(243MiB/2007msec) 00:15:29.765 slat (usec): min=3, max=119, avg= 3.86, stdev= 1.77 00:15:29.765 clat (usec): min=1947, max=20991, avg=9304.89, stdev=2875.81 00:15:29.765 lat (usec): min=1950, max=20995, avg=9308.75, stdev=2875.86 00:15:29.765 clat percentiles (usec): 00:15:29.765 | 1.00th=[ 4359], 5.00th=[ 5080], 10.00th=[ 5604], 20.00th=[ 6718], 00:15:29.765 | 30.00th=[ 7570], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[ 9765], 00:15:29.765 | 70.00th=[10683], 80.00th=[11469], 90.00th=[13304], 95.00th=[14746], 00:15:29.765 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18220], 99.95th=[18482], 00:15:29.765 | 99.99th=[20841] 00:15:29.765 bw ( KiB/s): min=53408, max=69792, per=50.51%, avg=62704.00, stdev=7370.68, samples=4 00:15:29.765 iops : min= 3338, max= 4362, avg=3919.00, stdev=460.67, samples=4 00:15:29.765 write: IOPS=4485, BW=70.1MiB/s (73.5MB/s)(128MiB/1831msec); 0 zone resets 00:15:29.765 slat (usec): min=36, max=162, avg=38.60, stdev= 4.26 00:15:29.765 clat (usec): min=5243, max=22503, avg=12739.43, stdev=2445.29 00:15:29.765 lat (usec): min=5281, max=22540, avg=12778.03, stdev=2445.51 00:15:29.765 clat percentiles (usec): 00:15:29.765 | 1.00th=[ 8094], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10814], 00:15:29.765 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12256], 60.00th=[12911], 00:15:29.765 | 70.00th=[13698], 80.00th=[14877], 90.00th=[16450], 95.00th=[17171], 00:15:29.765 | 99.00th=[18482], 99.50th=[19792], 99.90th=[22152], 99.95th=[22152], 00:15:29.765 | 99.99th=[22414] 00:15:29.765 bw ( KiB/s): min=55360, max=73568, per=90.74%, avg=65120.00, stdev=8138.87, samples=4 00:15:29.765 iops : min= 3460, max= 4598, avg=4070.00, stdev=508.68, samples=4 00:15:29.765 lat (msec) : 2=0.01%, 4=0.22%, 10=43.94%, 20=55.66%, 50=0.18% 00:15:29.765 cpu : usr=77.72%, sys=17.60%, ctx=4, majf=0, minf=14 00:15:29.765 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:15:29.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:29.765 issued rwts: total=15572,8213,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.765 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:29.765 00:15:29.765 Run status group 0 (all jobs): 00:15:29.765 READ: bw=121MiB/s 
(127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=243MiB (255MB), run=2007-2007msec 00:15:29.765 WRITE: bw=70.1MiB/s (73.5MB/s), 70.1MiB/s-70.1MiB/s (73.5MB/s-73.5MB/s), io=128MiB (135MB), run=1831-1831msec 00:15:29.765 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.765 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:29.765 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:29.765 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:29.765 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:29.766 rmmod nvme_tcp 00:15:29.766 rmmod nvme_fabrics 00:15:29.766 rmmod nvme_keyring 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 74688 ']' 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 74688 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 74688 ']' 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 74688 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74688 00:15:29.766 killing process with pid 74688 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74688' 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 74688 00:15:29.766 13:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 74688 00:15:30.024 13:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:30.024 13:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:30.024 13:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:30.024 13:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:30.024 
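The nvmf_fio_host teardown traced above reduces to three steps: delete the test subsystem over RPC, unload the initiator-side kernel modules, and stop the target process. A minimal sketch of that sequence, using the paths and the target PID (74688) from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nvmfpid=74688    # target PID from this run; substitute your own

    # drop the test subsystem before the transport goes away
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # unload the initiator-side kernel modules (nvme-tcp first, then nvme-fabrics)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # stop the nvmf target and wait for it to exit
    kill "$nvmfpid"
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.5; done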
13:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:30.024 13:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.025 13:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:30.025 13:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.283 13:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:30.284 ************************************ 00:15:30.284 END TEST nvmf_fio_host 00:15:30.284 ************************************ 00:15:30.284 00:15:30.284 real 0m8.778s 00:15:30.284 user 0m35.711s 00:15:30.284 sys 0m2.442s 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.284 ************************************ 00:15:30.284 START TEST nvmf_failover 00:15:30.284 ************************************ 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:30.284 * Looking for test storage... 
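The failover suite starting here is driven through run_test, but with the checkout path shown in this log it can also be launched on its own; run it as root, since the script goes on to create network namespaces, veth pairs, and iptables rules. A sketch under that assumption:

    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/nvmf/host/failover.sh --transport=tcp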
00:15:30.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:30.284 13:58:19 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:30.284 Cannot find device "nvmf_tgt_br" 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # true 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:30.284 Cannot find device "nvmf_tgt_br2" 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # true 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # ip link 
set nvmf_init_br down 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:30.284 Cannot find device "nvmf_tgt_br" 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # true 00:15:30.284 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:30.543 Cannot find device "nvmf_tgt_br2" 00:15:30.543 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # true 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:30.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:30.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type 
bridge 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:30.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:30.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:15:30.544 00:15:30.544 --- 10.0.0.2 ping statistics --- 00:15:30.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.544 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:30.544 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:30.544 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:15:30.544 00:15:30.544 --- 10.0.0.3 ping statistics --- 00:15:30.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.544 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:30.544 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:30.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:30.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:15:30.803 00:15:30.803 --- 10.0.0.1 ping statistics --- 00:15:30.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.803 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:30.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
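The nvmf_veth_init block above builds a small virtual topology: one veth pair for the initiator, two for the target inside the nvmf_tgt_ns_spdk namespace, all joined by a bridge, with 10.0.0.1 on the host side and 10.0.0.2/10.0.0.3 on the target side. A condensed, runnable sketch of the traced commands (run as root; names and addresses are exactly those in the trace):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # addresses: initiator IP on the host, both target IPs inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and bridge the host-side peers together
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # let NVMe/TCP traffic in on 4420 and allow forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # same reachability check as the ping output above
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3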
00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75028 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 75028 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75028 ']' 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:30.803 13:58:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:30.803 [2024-07-25 13:58:19.655601] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:15:30.803 [2024-07-25 13:58:19.655888] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.803 [2024-07-25 13:58:19.792817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:31.062 [2024-07-25 13:58:19.912732] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.062 [2024-07-25 13:58:19.913096] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.062 [2024-07-25 13:58:19.913430] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.062 [2024-07-25 13:58:19.913623] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.062 [2024-07-25 13:58:19.913648] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
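With the namespace in place, the target is started inside it and the harness blocks until the RPC socket answers (the waitforlisten step above). A simplified equivalent, assuming the default /var/tmp/spdk.sock socket; polling rpc_get_methods stands in for the harness helper:

    spdk=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # wait until the target's RPC server is ready before configuring it
    until $spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done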
00:15:31.062 [2024-07-25 13:58:19.913788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:31.062 [2024-07-25 13:58:19.913971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:31.062 [2024-07-25 13:58:19.913988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.062 [2024-07-25 13:58:19.969249] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:31.630 13:58:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.630 13:58:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:15:31.630 13:58:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:31.630 13:58:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:31.630 13:58:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:31.630 13:58:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.630 13:58:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:32.197 [2024-07-25 13:58:20.931591] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.197 13:58:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:32.197 Malloc0 00:15:32.455 13:58:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:32.714 13:58:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:32.972 13:58:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.231 [2024-07-25 13:58:22.159087] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.231 13:58:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:33.490 [2024-07-25 13:58:22.415197] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:33.490 13:58:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:33.806 [2024-07-25 13:58:22.667437] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:33.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
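The RPC sequence just traced provisions everything the failover test needs: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, and subsystem cnode1 exposing that bdev on three listeners (4420-4422) that the test will later remove and re-add. Condensed into a sketch against the default RPC socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # transport and backing device
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0

    # one subsystem, one namespace, three TCP listeners on the target IP
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done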
00:15:33.806 13:58:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75091 00:15:33.806 13:58:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:33.806 13:58:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:33.806 13:58:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75091 /var/tmp/bdevperf.sock 00:15:33.806 13:58:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75091 ']' 00:15:33.806 13:58:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:33.806 13:58:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:33.806 13:58:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:33.806 13:58:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:33.806 13:58:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:34.742 13:58:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:34.742 13:58:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:15:34.743 13:58:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:35.001 NVMe0n1 00:15:35.001 13:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:35.567 00:15:35.567 13:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75109 00:15:35.567 13:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:35.567 13:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:36.501 13:58:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.760 [2024-07-25 13:58:25.668330] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8ba70 is same with the state(5) to be set 00:15:36.760 13:58:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:40.049 13:58:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:40.049 00:15:40.049 13:58:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:40.615 13:58:29 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@50 -- # sleep 3 00:15:43.896 13:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:43.896 [2024-07-25 13:58:32.632537] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.896 13:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:44.829 13:58:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:45.088 13:58:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75109 00:15:51.655 0 00:15:51.655 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75091 00:15:51.656 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75091 ']' 00:15:51.656 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75091 00:15:51.656 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:15:51.656 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:51.656 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75091 00:15:51.656 killing process with pid 75091 00:15:51.656 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:51.656 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:51.656 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75091' 00:15:51.656 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75091 00:15:51.656 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75091 00:15:51.656 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:51.656 [2024-07-25 13:58:22.751227] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:15:51.656 [2024-07-25 13:58:22.751415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75091 ] 00:15:51.656 [2024-07-25 13:58:22.920474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.656 [2024-07-25 13:58:23.053016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.656 [2024-07-25 13:58:23.107980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:51.656 Running I/O for 15 seconds... 
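The try.txt dump that follows shows bdevperf riding through that sequence: two paths to cnode1 are attached up front, and while the 15-second verify workload runs, listeners are removed and added so I/O fails over between ports; the ABORTED - SQ DELETION completions below are in-flight commands completed when a queue pair on the failing path is torn down. The choreography, condensed from the traced RPC calls (bdevperf listening on /var/tmp/bdevperf.sock as above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    brpc="$rpc -s /var/tmp/bdevperf.sock"
    subsys=nqn.2016-06.io.spdk:cnode1

    # two paths to the same subsystem -> bdevperf sees a single NVMe0n1 bdev
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $subsys
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $subsys

    # with verify I/O running, flip listeners so the active path keeps changing
    $rpc nvmf_subsystem_remove_listener $subsys -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $subsys
    $rpc nvmf_subsystem_remove_listener $subsys -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    $rpc nvmf_subsystem_add_listener $subsys -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    $rpc nvmf_subsystem_remove_listener $subsys -t tcp -a 10.0.0.2 -s 4422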
00:15:51.656 [2024-07-25 13:58:25.669223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.656 [2024-07-25 13:58:25.669542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.669663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.656 [2024-07-25 13:58:25.669756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.669846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.656 [2024-07-25 13:58:25.670007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.670089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.656 [2024-07-25 13:58:25.670169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.670245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.656 [2024-07-25 13:58:25.670344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.670428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.656 [2024-07-25 13:58:25.670514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.670582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.656 [2024-07-25 13:58:25.670670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.670738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.656 [2024-07-25 13:58:25.670820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.670900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.656 [2024-07-25 13:58:25.670977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.671058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.656 [2024-07-25 13:58:25.671139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.671213] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.656 [2024-07-25 13:58:25.671345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.671427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.656 [2024-07-25 13:58:25.671508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.671583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.656 [2024-07-25 13:58:25.671665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.671733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.656 [2024-07-25 13:58:25.671814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.671889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.656 [2024-07-25 13:58:25.671973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.672039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.656 [2024-07-25 13:58:25.672136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.672216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.656 [2024-07-25 13:58:25.672321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.672405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.656 [2024-07-25 13:58:25.672492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.672567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.656 [2024-07-25 13:58:25.672661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.672735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.656 [2024-07-25 13:58:25.672813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.672879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.656 [2024-07-25 13:58:25.672964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.656 [2024-07-25 13:58:25.673039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.656 [2024-07-25 13:58:25.673119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.673193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.657 [2024-07-25 13:58:25.673273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.673389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.657 [2024-07-25 13:58:25.673473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.673550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.657 [2024-07-25 13:58:25.673632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.673713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.657 [2024-07-25 13:58:25.673792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.673867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.657 [2024-07-25 13:58:25.673945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.674019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.657 [2024-07-25 13:58:25.674101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.674168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.657 [2024-07-25 13:58:25.674248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.674340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.657 [2024-07-25 13:58:25.674424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.674501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:22 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.657 [2024-07-25 13:58:25.674580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.674646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.657 [2024-07-25 13:58:25.674729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.674805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.657 [2024-07-25 13:58:25.674882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.674950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.657 [2024-07-25 13:58:25.675025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.675110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.657 [2024-07-25 13:58:25.675192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.675259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.657 [2024-07-25 13:58:25.675358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.675456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.657 [2024-07-25 13:58:25.675533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.675600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.657 [2024-07-25 13:58:25.675680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.675755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.657 [2024-07-25 13:58:25.675843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.675920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.657 [2024-07-25 13:58:25.675986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.676062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82368 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.657 [2024-07-25 13:58:25.676176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.676255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.657 [2024-07-25 13:58:25.676356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.676439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.657 [2024-07-25 13:58:25.676520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.676617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.657 [2024-07-25 13:58:25.676741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.676857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.657 [2024-07-25 13:58:25.676968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.677090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.657 [2024-07-25 13:58:25.677225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.677362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.657 [2024-07-25 13:58:25.677449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.677526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.657 [2024-07-25 13:58:25.677609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.677703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.657 [2024-07-25 13:58:25.677857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.677983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.657 [2024-07-25 13:58:25.678075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.678155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.657 
[2024-07-25 13:58:25.678244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.678398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.657 [2024-07-25 13:58:25.678530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.678634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.657 [2024-07-25 13:58:25.678714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.678791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.657 [2024-07-25 13:58:25.678877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.657 [2024-07-25 13:58:25.678972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.657 [2024-07-25 13:58:25.679109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.679207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.658 [2024-07-25 13:58:25.679337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.679440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.658 [2024-07-25 13:58:25.679596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.679725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.658 [2024-07-25 13:58:25.679853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.679954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.658 [2024-07-25 13:58:25.680047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.680137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.658 [2024-07-25 13:58:25.680226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.680373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.658 [2024-07-25 13:58:25.680495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.680634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.658 [2024-07-25 13:58:25.680768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.680893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.658 [2024-07-25 13:58:25.680985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.658 [2024-07-25 13:58:25.681041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.658 [2024-07-25 13:58:25.681070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.658 [2024-07-25 13:58:25.681100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.658 [2024-07-25 13:58:25.681129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.658 [2024-07-25 13:58:25.681157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.658 [2024-07-25 13:58:25.681186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.658 [2024-07-25 13:58:25.681222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.658 [2024-07-25 13:58:25.681251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.658 [2024-07-25 13:58:25.681279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.658 [2024-07-25 13:58:25.681333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.658 [2024-07-25 13:58:25.681374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.658 [2024-07-25 13:58:25.681405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.658 [2024-07-25 13:58:25.681435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.658 [2024-07-25 13:58:25.681464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.658 [2024-07-25 13:58:25.681493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.658 [2024-07-25 13:58:25.681531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.658 [2024-07-25 13:58:25.681562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.658 [2024-07-25 13:58:25.681592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.658 [2024-07-25 13:58:25.681620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.658 [2024-07-25 13:58:25.681649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.658 [2024-07-25 13:58:25.681678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.658 [2024-07-25 13:58:25.681707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.658 [2024-07-25 13:58:25.681722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.658 [2024-07-25 13:58:25.681735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.681751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.659 [2024-07-25 13:58:25.681771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.681787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.659 [2024-07-25 13:58:25.681807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.681836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.659 [2024-07-25 13:58:25.681852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.681867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.659 [2024-07-25 13:58:25.681880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.681896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.659 [2024-07-25 13:58:25.681910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 
13:58:25.681926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.659 [2024-07-25 13:58:25.681939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.681960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.659 [2024-07-25 13:58:25.681985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.659 [2024-07-25 13:58:25.682016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.659 [2024-07-25 13:58:25.682052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.659 [2024-07-25 13:58:25.682082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.659 [2024-07-25 13:58:25.682116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.659 [2024-07-25 13:58:25.682161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.659 [2024-07-25 13:58:25.682191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.659 [2024-07-25 13:58:25.682229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.659 [2024-07-25 13:58:25.682258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.659 [2024-07-25 13:58:25.682324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.659 [2024-07-25 13:58:25.682359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.659 [2024-07-25 13:58:25.682391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.659 [2024-07-25 13:58:25.682446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.659 [2024-07-25 13:58:25.682486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.659 [2024-07-25 13:58:25.682522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.659 [2024-07-25 13:58:25.682571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.659 [2024-07-25 13:58:25.682619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1316e00 is same with the state(5) to be set 00:15:51.659 [2024-07-25 13:58:25.682656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.659 [2024-07-25 13:58:25.682668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.659 [2024-07-25 13:58:25.682690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82248 len:8 PRP1 0x0 PRP2 0x0 00:15:51.659 [2024-07-25 13:58:25.682714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682731] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.659 [2024-07-25 13:58:25.682751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.659 [2024-07-25 13:58:25.682763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82736 len:8 PRP1 0x0 PRP2 0x0 00:15:51.659 [2024-07-25 13:58:25.682776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.659 [2024-07-25 13:58:25.682800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.659 [2024-07-25 13:58:25.682810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82744 len:8 PRP1 0x0 PRP2 0x0 00:15:51.659 [2024-07-25 13:58:25.682823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.659 [2024-07-25 13:58:25.682846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.659 [2024-07-25 13:58:25.682857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82752 len:8 PRP1 0x0 PRP2 0x0 00:15:51.659 [2024-07-25 13:58:25.682869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.659 [2024-07-25 13:58:25.682883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.659 [2024-07-25 13:58:25.682893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.659 [2024-07-25 13:58:25.682903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82760 len:8 PRP1 0x0 PRP2 0x0 00:15:51.659 [2024-07-25 13:58:25.682916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.660 [2024-07-25 13:58:25.682929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.660 [2024-07-25 13:58:25.682939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.660 [2024-07-25 13:58:25.682949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82768 len:8 PRP1 0x0 PRP2 0x0 00:15:51.660 [2024-07-25 13:58:25.682962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.660 [2024-07-25 13:58:25.682976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.660 [2024-07-25 13:58:25.682986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.660 [2024-07-25 13:58:25.682996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82776 len:8 PRP1 0x0 PRP2 0x0 00:15:51.660 [2024-07-25 13:58:25.683008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.660 [2024-07-25 13:58:25.683022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:15:51.660 [2024-07-25 13:58:25.683031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.660 [2024-07-25 13:58:25.683041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82784 len:8 PRP1 0x0 PRP2 0x0 00:15:51.660 [2024-07-25 13:58:25.683054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.660 [2024-07-25 13:58:25.683067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.660 [2024-07-25 13:58:25.683077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.660 [2024-07-25 13:58:25.683093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82792 len:8 PRP1 0x0 PRP2 0x0 00:15:51.660 [2024-07-25 13:58:25.683107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.660 [2024-07-25 13:58:25.683126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.660 [2024-07-25 13:58:25.683137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.660 [2024-07-25 13:58:25.683147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82800 len:8 PRP1 0x0 PRP2 0x0 00:15:51.660 [2024-07-25 13:58:25.683160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.660 [2024-07-25 13:58:25.683173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.660 [2024-07-25 13:58:25.683183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.660 [2024-07-25 13:58:25.683193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82808 len:8 PRP1 0x0 PRP2 0x0 00:15:51.660 [2024-07-25 13:58:25.683206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.660 [2024-07-25 13:58:25.683219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.660 [2024-07-25 13:58:25.683229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.660 [2024-07-25 13:58:25.683239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82816 len:8 PRP1 0x0 PRP2 0x0 00:15:51.660 [2024-07-25 13:58:25.683252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.660 [2024-07-25 13:58:25.683265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.660 [2024-07-25 13:58:25.683275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.660 [2024-07-25 13:58:25.683285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82824 len:8 PRP1 0x0 PRP2 0x0 00:15:51.660 [2024-07-25 13:58:25.683315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.660 [2024-07-25 13:58:25.683332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.660 [2024-07-25 
13:58:25.683342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.660 [2024-07-25 13:58:25.683352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82832 len:8 PRP1 0x0 PRP2 0x0 00:15:51.660 [2024-07-25 13:58:25.683365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.660 [2024-07-25 13:58:25.683381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.660 [2024-07-25 13:58:25.683391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.660 [2024-07-25 13:58:25.683401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82840 len:8 PRP1 0x0 PRP2 0x0 00:15:51.660 [2024-07-25 13:58:25.683413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.660 [2024-07-25 13:58:25.683427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.660 [2024-07-25 13:58:25.683437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.660 [2024-07-25 13:58:25.683447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82848 len:8 PRP1 0x0 PRP2 0x0 00:15:51.660 [2024-07-25 13:58:25.683460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.660 [2024-07-25 13:58:25.683474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.660 [2024-07-25 13:58:25.683484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.660 [2024-07-25 13:58:25.683500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82856 len:8 PRP1 0x0 PRP2 0x0 00:15:51.660 [2024-07-25 13:58:25.683524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.660 [2024-07-25 13:58:25.683539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.660 [2024-07-25 13:58:25.683549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.660 [2024-07-25 13:58:25.683560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82864 len:8 PRP1 0x0 PRP2 0x0 00:15:51.660 [2024-07-25 13:58:25.683573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.660 [2024-07-25 13:58:25.683586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.660 [2024-07-25 13:58:25.683596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.660 [2024-07-25 13:58:25.683606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82872 len:8 PRP1 0x0 PRP2 0x0 00:15:51.660 [2024-07-25 13:58:25.683619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.660 [2024-07-25 13:58:25.683705] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1316e00 was disconnected and freed. reset controller. 
00:15:51.660 [2024-07-25 13:58:25.683725] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:15:51.660 [2024-07-25 13:58:25.683827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:51.660 [2024-07-25 13:58:25.683849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:51.660 [2024-07-25 13:58:25.683865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:51.660 [2024-07-25 13:58:25.683878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:51.660 [2024-07-25 13:58:25.683892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:51.660 [2024-07-25 13:58:25.683905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:51.660 [2024-07-25 13:58:25.683919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:51.660 [2024-07-25 13:58:25.683932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:51.660 [2024-07-25 13:58:25.683945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:51.660 [2024-07-25 13:58:25.684030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a7570 (9): Bad file descriptor 
00:15:51.661 [2024-07-25 13:58:25.688027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:15:51.661 [2024-07-25 13:58:25.723100] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
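For reference: the repeated "(00/08)" completions above are NVMe generic status (SCT 0x0) with status code 0x08, "Command Aborted due to SQ Deletion" in the NVMe base specification. They are the expected side effect of the failover test: when the TCP qpair to 10.0.0.2:4420 is torn down, every command still outstanding on that submission queue is aborted, the queued requests are completed manually, and the controller is reset and, per the notices above, failed over to 10.0.0.2:4421. A minimal, self-contained C sketch of how the "(sct/sc)" pair printed in these records maps to that text follows; the status_string helper and its lookup are illustrative assumptions for this note, not SPDK code.

/* decode_status.c - illustrative only; decodes the "(sct/sc)" pair seen in the log above. */
#include <stdio.h>

/* Hypothetical helper: maps the two NVMe status fields to the text printed in the log.
 * Values follow the NVMe base specification; only the cases relevant here are handled. */
static const char *status_string(unsigned int sct, unsigned int sc)
{
    if (sct == 0x0 && sc == 0x00) return "SUCCESS";
    if (sct == 0x0 && sc == 0x08) return "ABORTED - SQ DELETION";
    return "OTHER";
}

int main(void)
{
    unsigned int sct = 0x00, sc = 0x08;   /* the pair reported throughout this run */
    printf("(%02x/%02x) -> %s\n", sct, sc, status_string(sct, sc));
    return 0;
}

Compiled with any C compiler (for example, cc decode_status.c), the sketch prints "(00/08) -> ABORTED - SQ DELETION". The second burst of aborted WRITE/READ commands starting at 13:58:29 below appears to be the next iteration of the same disconnect-and-failover cycle.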
00:15:51.661 [2024-07-25 13:58:29.336855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.661 [2024-07-25 13:58:29.336940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.661 [2024-07-25 13:58:29.336971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.661 [2024-07-25 13:58:29.336987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.661 [2024-07-25 13:58:29.337004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.661 [2024-07-25 13:58:29.337049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.661 [2024-07-25 13:58:29.337066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.661 [2024-07-25 13:58:29.337080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.661 [2024-07-25 13:58:29.337096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.661 [2024-07-25 13:58:29.337109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.661 [2024-07-25 13:58:29.337125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.661 [2024-07-25 13:58:29.337139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.661 [2024-07-25 13:58:29.337154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.661 [2024-07-25 13:58:29.337168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.661 [2024-07-25 13:58:29.337183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.661 [2024-07-25 13:58:29.337198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.661 [2024-07-25 13:58:29.337213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.661 [2024-07-25 13:58:29.337227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.661 [2024-07-25 13:58:29.337243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.661 [2024-07-25 13:58:29.337256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.661 [2024-07-25 13:58:29.337272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.661 [2024-07-25 13:58:29.337286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.661 [2024-07-25 13:58:29.337313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.661 [2024-07-25 13:58:29.337330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.661 [2024-07-25 13:58:29.337346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.661 [2024-07-25 13:58:29.337360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.661 [2024-07-25 13:58:29.337377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.661 [2024-07-25 13:58:29.337391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.661 [2024-07-25 13:58:29.337406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.661 [2024-07-25 13:58:29.337426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.661 [2024-07-25 13:58:29.337452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.661 [2024-07-25 13:58:29.337467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.661 [2024-07-25 13:58:29.337484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.661 [2024-07-25 13:58:29.337498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.661 [2024-07-25 13:58:29.337516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.661 [2024-07-25 13:58:29.337530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.661 [2024-07-25 13:58:29.337546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.661 [2024-07-25 13:58:29.337560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.661 [2024-07-25 13:58:29.337575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.661 [2024-07-25 13:58:29.337589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.661 [2024-07-25 13:58:29.337605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:57 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.662 [2024-07-25 13:58:29.337618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.337634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.662 [2024-07-25 13:58:29.337648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.337663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.662 [2024-07-25 13:58:29.337677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.337692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.662 [2024-07-25 13:58:29.337706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.337722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.662 [2024-07-25 13:58:29.337735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.337751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.662 [2024-07-25 13:58:29.337765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.337781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.662 [2024-07-25 13:58:29.337795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.337811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.662 [2024-07-25 13:58:29.337834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.337853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.662 [2024-07-25 13:58:29.337868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.337884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.662 [2024-07-25 13:58:29.337899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.337915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96472 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.662 [2024-07-25 13:58:29.337930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.337957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.662 [2024-07-25 13:58:29.337971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.337988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.662 [2024-07-25 13:58:29.338002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.338019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.662 [2024-07-25 13:58:29.338033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.338050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.662 [2024-07-25 13:58:29.338065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.338082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.662 [2024-07-25 13:58:29.338097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.338113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.662 [2024-07-25 13:58:29.338127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.338143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.662 [2024-07-25 13:58:29.338157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.338173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.662 [2024-07-25 13:58:29.338187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.338203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.662 [2024-07-25 13:58:29.338217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.338232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.662 
[2024-07-25 13:58:29.338253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.338270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.662 [2024-07-25 13:58:29.338284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.338311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.662 [2024-07-25 13:58:29.338327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.338342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.662 [2024-07-25 13:58:29.338358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.338374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.662 [2024-07-25 13:58:29.338389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.338406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.662 [2024-07-25 13:58:29.338419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.338435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.662 [2024-07-25 13:58:29.338449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.338464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.662 [2024-07-25 13:58:29.338479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.338494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.662 [2024-07-25 13:58:29.338509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.338524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.662 [2024-07-25 13:58:29.338539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.338556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.662 [2024-07-25 13:58:29.338570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.338586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.662 [2024-07-25 13:58:29.338599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.662 [2024-07-25 13:58:29.338615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.338640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.338663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.338678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.338694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.338708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.338723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.338737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.338753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.338767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.338782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.338797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.338812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.338826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.338843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.338857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.338873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.338887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.338902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.338916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.338931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.338946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.338962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.338976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.338991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.663 [2024-07-25 13:58:29.339006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.663 [2024-07-25 13:58:29.339044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.663 [2024-07-25 13:58:29.339074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.663 [2024-07-25 13:58:29.339104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.663 [2024-07-25 13:58:29.339135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.663 [2024-07-25 13:58:29.339164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.663 [2024-07-25 13:58:29.339195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.663 [2024-07-25 13:58:29.339224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.663 [2024-07-25 13:58:29.339255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.663 [2024-07-25 13:58:29.339285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.663 [2024-07-25 13:58:29.339327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.663 [2024-07-25 13:58:29.339358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.663 [2024-07-25 13:58:29.339388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.663 [2024-07-25 13:58:29.339419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.663 [2024-07-25 13:58:29.339457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.663 [2024-07-25 13:58:29.339486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.339525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 
13:58:29.339541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.339556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.339585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.339614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.339645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.339675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.339704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.339738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.339769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.339798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.663 [2024-07-25 13:58:29.339814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.663 [2024-07-25 13:58:29.339834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.339853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.664 [2024-07-25 13:58:29.339867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.339883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.664 [2024-07-25 13:58:29.339897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.339914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.664 [2024-07-25 13:58:29.339927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.339943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.664 [2024-07-25 13:58:29.339957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.339973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.664 [2024-07-25 13:58:29.339986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.664 [2024-07-25 13:58:29.340022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.664 [2024-07-25 13:58:29.340057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.664 [2024-07-25 13:58:29.340098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.664 [2024-07-25 13:58:29.340130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.664 [2024-07-25 13:58:29.340159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:106 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.664 [2024-07-25 13:58:29.340195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.664 [2024-07-25 13:58:29.340224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.664 [2024-07-25 13:58:29.340261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.664 [2024-07-25 13:58:29.340296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.664 [2024-07-25 13:58:29.340338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.664 [2024-07-25 13:58:29.340368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.664 [2024-07-25 13:58:29.340409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.664 [2024-07-25 13:58:29.340439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.664 [2024-07-25 13:58:29.340469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.664 [2024-07-25 13:58:29.340499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96800 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.664 [2024-07-25 13:58:29.340528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.664 [2024-07-25 13:58:29.340563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.664 [2024-07-25 13:58:29.340593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.664 [2024-07-25 13:58:29.340625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.664 [2024-07-25 13:58:29.340656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.664 [2024-07-25 13:58:29.340693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.664 [2024-07-25 13:58:29.340722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.664 [2024-07-25 13:58:29.340751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13187f0 is same with the state(5) to be set 00:15:51.664 [2024-07-25 13:58:29.340786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.664 [2024-07-25 13:58:29.340796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.664 [2024-07-25 13:58:29.340808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96864 len:8 PRP1 0x0 PRP2 0x0 00:15:51.664 [2024-07-25 13:58:29.340821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.664 
[2024-07-25 13:58:29.340846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.664 [2024-07-25 13:58:29.340856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97320 len:8 PRP1 0x0 PRP2 0x0 00:15:51.664 [2024-07-25 13:58:29.340875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.664 [2024-07-25 13:58:29.340898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.664 [2024-07-25 13:58:29.340908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97328 len:8 PRP1 0x0 PRP2 0x0 00:15:51.664 [2024-07-25 13:58:29.340921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.664 [2024-07-25 13:58:29.340945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.664 [2024-07-25 13:58:29.340955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97336 len:8 PRP1 0x0 PRP2 0x0 00:15:51.664 [2024-07-25 13:58:29.340968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.340982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.664 [2024-07-25 13:58:29.340992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.664 [2024-07-25 13:58:29.341007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97344 len:8 PRP1 0x0 PRP2 0x0 00:15:51.664 [2024-07-25 13:58:29.341020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.341034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.664 [2024-07-25 13:58:29.341043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.664 [2024-07-25 13:58:29.341065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97352 len:8 PRP1 0x0 PRP2 0x0 00:15:51.664 [2024-07-25 13:58:29.341079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.664 [2024-07-25 13:58:29.341093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.664 [2024-07-25 13:58:29.341103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.665 [2024-07-25 13:58:29.341113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97360 len:8 PRP1 0x0 PRP2 0x0 00:15:51.665 [2024-07-25 13:58:29.341126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:29.341140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.665 [2024-07-25 13:58:29.341150] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.665 [2024-07-25 13:58:29.341160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97368 len:8 PRP1 0x0 PRP2 0x0 00:15:51.665 [2024-07-25 13:58:29.341172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:29.341186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.665 [2024-07-25 13:58:29.341196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.665 [2024-07-25 13:58:29.341206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97376 len:8 PRP1 0x0 PRP2 0x0 00:15:51.665 [2024-07-25 13:58:29.341219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:29.341297] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13187f0 was disconnected and freed. reset controller. 00:15:51.665 [2024-07-25 13:58:29.341332] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:15:51.665 [2024-07-25 13:58:29.341410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.665 [2024-07-25 13:58:29.341433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:29.341448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.665 [2024-07-25 13:58:29.341462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:29.341476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.665 [2024-07-25 13:58:29.341489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:29.341504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.665 [2024-07-25 13:58:29.341518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:29.341532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:51.665 [2024-07-25 13:58:29.345540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:51.665 [2024-07-25 13:58:29.345604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a7570 (9): Bad file descriptor 00:15:51.665 [2024-07-25 13:58:29.381890] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
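(Editorial note, not part of the test output.) The burst of "ABORTED - SQ DELETION" completions above ends with the expected recovery sequence: the pending admin ASYNC EVENT REQUESTs are aborted, the controller is marked failed, a failover from 10.0.0.2:4421 to 10.0.0.2:4422 is started, and the reset finishes with "Resetting controller successful." As a minimal sketch only, here is one way to pull those recovery milestones out of a console log shaped like this one; the file name "console.log" and the marker substrings are assumptions based on the messages visible above, and a physical console line may carry several SPDK messages, so the text is split on the bracketed timestamps rather than on newlines.

```python
import re

# Each SPDK message carries a bracketed timestamp like [2024-07-25 13:58:29.345540].
MSG = re.compile(r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\]")

# Marker substrings taken from the messages shown in the log above (assumed sufficient).
MARKERS = {
    "failover started": "bdev_nvme_failover_trid",
    "qpair freed":      "was disconnected and freed",
    "reset started":    "resetting controller",
    "reset successful": "Resetting controller successful",
}

def reset_milestones(text):
    """Yield (timestamp, event) pairs in the order they appear in the log text."""
    parts = MSG.split(text)  # [prefix, ts1, msg1, ts2, msg2, ...]
    for ts, msg in zip(parts[1::2], parts[2::2]):
        for event, marker in MARKERS.items():
            if marker in msg:
                yield ts, event

if __name__ == "__main__":
    # Hypothetical file name; point this at a saved copy of the console output.
    with open("console.log", encoding="utf-8", errors="replace") as fh:
        for ts, event in reset_milestones(fh.read()):
            print(f"{ts}  {event}")
```

Run against this section, such a script would report the failover start, the freeing of qpair 0x13187f0, the reset, and the successful completion in order, which is usually all that matters when skimming these very long abort dumps.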
00:15:51.665 [2024-07-25 13:58:33.966213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.665 [2024-07-25 13:58:33.966326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.665 [2024-07-25 13:58:33.966377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.665 [2024-07-25 13:58:33.966408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.665 [2024-07-25 13:58:33.966437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.665 [2024-07-25 13:58:33.966467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.665 [2024-07-25 13:58:33.966495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.665 [2024-07-25 13:58:33.966524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.665 [2024-07-25 13:58:33.966552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.665 [2024-07-25 13:58:33.966582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.665 [2024-07-25 13:58:33.966611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966626] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.665 [2024-07-25 13:58:33.966640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.665 [2024-07-25 13:58:33.966669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.665 [2024-07-25 13:58:33.966700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.665 [2024-07-25 13:58:33.966738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.665 [2024-07-25 13:58:33.966768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.665 [2024-07-25 13:58:33.966797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.665 [2024-07-25 13:58:33.966827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.665 [2024-07-25 13:58:33.966862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.665 [2024-07-25 13:58:33.966892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.665 [2024-07-25 13:58:33.966921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966937] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.665 [2024-07-25 13:58:33.966951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.665 [2024-07-25 13:58:33.966981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.966996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.665 [2024-07-25 13:58:33.967010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.967026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.665 [2024-07-25 13:58:33.967040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.967055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.665 [2024-07-25 13:58:33.967068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.967083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.665 [2024-07-25 13:58:33.967097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.967212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.665 [2024-07-25 13:58:33.967231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.967246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.665 [2024-07-25 13:58:33.967261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.665 [2024-07-25 13:58:33.967276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.967290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.967322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.967349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.967376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44328 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.967402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.967431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.967455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.967479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.967502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.967526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.967548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.967572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.967595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.967619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.967641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.967668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.967691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.967717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.967738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.967763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.967799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.967825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.967849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.967875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:51.666 [2024-07-25 13:58:33.967898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.967924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.666 [2024-07-25 13:58:33.967946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.967971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.666 [2024-07-25 13:58:33.967993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.968020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.666 [2024-07-25 13:58:33.968044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.968068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.666 [2024-07-25 13:58:33.968106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.968135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.666 [2024-07-25 13:58:33.968158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.968182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.666 [2024-07-25 13:58:33.968209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.968246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.666 [2024-07-25 13:58:33.968271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.968296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.968335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.968361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.968384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.968408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 
13:58:33.968432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.968471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.968493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.968518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.968539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.968563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.968585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.968611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.968636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.968663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.968685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.968711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.968733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.968758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.968785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.968812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.968834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.968860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.968884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.968910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.666 [2024-07-25 13:58:33.968932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.666 [2024-07-25 13:58:33.968957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.968978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.969002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.969025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.969051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.969087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.969114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.667 [2024-07-25 13:58:33.969138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.969163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.667 [2024-07-25 13:58:33.969185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.969213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.667 [2024-07-25 13:58:33.969236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.969262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.667 [2024-07-25 13:58:33.969284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.969329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.667 [2024-07-25 13:58:33.969364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.969387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.667 [2024-07-25 13:58:33.969409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.969434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.667 [2024-07-25 13:58:33.969456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.969481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.667 [2024-07-25 13:58:33.969506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.969532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.969555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.969580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.969603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.969626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.969648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.969674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.969697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.969736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.969758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.969782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.969804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.969829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.969852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.969876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.969897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.969923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.667 [2024-07-25 13:58:33.969949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.969977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.667 [2024-07-25 13:58:33.969999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.970023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.667 [2024-07-25 13:58:33.970046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.970071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.667 [2024-07-25 13:58:33.970093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.970118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.667 [2024-07-25 13:58:33.970141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.970167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.667 [2024-07-25 13:58:33.970188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.970215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.667 [2024-07-25 13:58:33.970236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.970260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:51.667 [2024-07-25 13:58:33.970283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.970322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.970346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.970383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.970405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.970432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.970467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 
13:58:33.970492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.970518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.970541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.970563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.970587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.970610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.970636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.970658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.970683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.970706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.970731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.970760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.970787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.970809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.970836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.970859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.970883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.970905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.667 [2024-07-25 13:58:33.970929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.667 [2024-07-25 13:58:33.970950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.970974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.668 [2024-07-25 13:58:33.971008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.971033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.668 [2024-07-25 13:58:33.971057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.971082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.668 [2024-07-25 13:58:33.971104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.971130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.668 [2024-07-25 13:58:33.971152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.971176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.668 [2024-07-25 13:58:33.971201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.971227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.668 [2024-07-25 13:58:33.971258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.971286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.668 [2024-07-25 13:58:33.971324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.971349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.668 [2024-07-25 13:58:33.971371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.971395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.668 [2024-07-25 13:58:33.971417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.971446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.668 [2024-07-25 13:58:33.971469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.971493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x13184b0 is same with the state(5) to be set 00:15:51.668 [2024-07-25 13:58:33.971524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.668 [2024-07-25 13:58:33.971540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.668 [2024-07-25 13:58:33.971564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44784 len:8 PRP1 0x0 PRP2 0x0 00:15:51.668 [2024-07-25 13:58:33.971586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.971610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.668 [2024-07-25 13:58:33.971627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.668 [2024-07-25 13:58:33.971654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45176 len:8 PRP1 0x0 PRP2 0x0 00:15:51.668 [2024-07-25 13:58:33.971677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.971698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.668 [2024-07-25 13:58:33.971716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.668 [2024-07-25 13:58:33.971734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45184 len:8 PRP1 0x0 PRP2 0x0 00:15:51.668 [2024-07-25 13:58:33.971756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.971777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.668 [2024-07-25 13:58:33.971792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.668 [2024-07-25 13:58:33.971808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45192 len:8 PRP1 0x0 PRP2 0x0 00:15:51.668 [2024-07-25 13:58:33.971828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.971850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.668 [2024-07-25 13:58:33.971865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.668 [2024-07-25 13:58:33.971881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45200 len:8 PRP1 0x0 PRP2 0x0 00:15:51.668 [2024-07-25 13:58:33.971901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.971923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.668 [2024-07-25 13:58:33.971940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.668 [2024-07-25 13:58:33.971958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45208 len:8 PRP1 0x0 PRP2 0x0 00:15:51.668 [2024-07-25 13:58:33.971980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.972002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.668 [2024-07-25 13:58:33.972017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.668 [2024-07-25 13:58:33.972035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45216 len:8 PRP1 0x0 PRP2 0x0 00:15:51.668 [2024-07-25 13:58:33.972058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.972080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.668 [2024-07-25 13:58:33.972115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.668 [2024-07-25 13:58:33.972132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45224 len:8 PRP1 0x0 PRP2 0x0 00:15:51.668 [2024-07-25 13:58:33.972153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.972179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.668 [2024-07-25 13:58:33.972211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.668 [2024-07-25 13:58:33.972235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45232 len:8 PRP1 0x0 PRP2 0x0 00:15:51.668 [2024-07-25 13:58:33.972257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.972280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.668 [2024-07-25 13:58:33.972325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.668 [2024-07-25 13:58:33.972346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44792 len:8 PRP1 0x0 PRP2 0x0 00:15:51.668 [2024-07-25 13:58:33.972368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.972390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.668 [2024-07-25 13:58:33.972407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.668 [2024-07-25 13:58:33.972425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44800 len:8 PRP1 0x0 PRP2 0x0 00:15:51.668 [2024-07-25 13:58:33.972446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.972468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.668 [2024-07-25 13:58:33.972483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.668 [2024-07-25 13:58:33.972499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44808 len:8 PRP1 0x0 PRP2 0x0 00:15:51.668 [2024-07-25 13:58:33.972520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.972541] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.668 [2024-07-25 13:58:33.972558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.668 [2024-07-25 13:58:33.972574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44816 len:8 PRP1 0x0 PRP2 0x0 00:15:51.668 [2024-07-25 13:58:33.972596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.972618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.668 [2024-07-25 13:58:33.972636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.668 [2024-07-25 13:58:33.972656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44824 len:8 PRP1 0x0 PRP2 0x0 00:15:51.668 [2024-07-25 13:58:33.972678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.972701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.668 [2024-07-25 13:58:33.972717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.668 [2024-07-25 13:58:33.972735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44832 len:8 PRP1 0x0 PRP2 0x0 00:15:51.668 [2024-07-25 13:58:33.972757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.972780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.668 [2024-07-25 13:58:33.972795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.668 [2024-07-25 13:58:33.972811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44840 len:8 PRP1 0x0 PRP2 0x0 00:15:51.668 [2024-07-25 13:58:33.972833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.668 [2024-07-25 13:58:33.972859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:51.668 [2024-07-25 13:58:33.972875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:51.668 [2024-07-25 13:58:33.972897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44848 len:8 PRP1 0x0 PRP2 0x0 00:15:51.668 [2024-07-25 13:58:33.972918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.669 [2024-07-25 13:58:33.973032] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13184b0 was disconnected and freed. reset controller. 
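The burst above is bdev_nvme aborting the I/O still queued on tqpair 0x13184b0 before the qpair is freed and the controller is reset; each queued READ/WRITE is completed manually with ABORTED - SQ DELETION status. When reading a saved copy of this console output, a short shell sketch like the following can summarize such a burst (the file name nvmf_failover.log is only a placeholder for wherever the log was saved):
    # total aborted completions in the saved log
    grep -c 'ABORTED - SQ DELETION' nvmf_failover.log
    # split the manually completed commands by opcode (READ vs WRITE)
    grep -Eo 'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE)' nvmf_failover.log | sort | uniq -c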
00:15:51.669 [2024-07-25 13:58:33.973061] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:15:51.669 [2024-07-25 13:58:33.973185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:15:51.669 [2024-07-25 13:58:33.973221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:51.669 [2024-07-25 13:58:33.973247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:15:51.669 [2024-07-25 13:58:33.973269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:51.669 [2024-07-25 13:58:33.973291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:15:51.669 [2024-07-25 13:58:33.973341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:51.669 [2024-07-25 13:58:33.973365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:15:51.669 [2024-07-25 13:58:33.973389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:51.669 [2024-07-25 13:58:33.973411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:15:51.669 [2024-07-25 13:58:33.973502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a7570 (9): Bad file descriptor
00:15:51.669 [2024-07-25 13:58:33.978078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:15:51.669 [2024-07-25 13:58:34.022788] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:15:51.669
00:15:51.669 Latency(us)
00:15:51.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:51.669 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:51.669 Verification LBA range: start 0x0 length 0x4000
00:15:51.669 NVMe0n1 : 15.01 9061.48 35.40 223.66 0.00 13751.67 685.15 23473.80
00:15:51.669 ===================================================================================================================
00:15:51.669 Total : 9061.48 35.40 223.66 0.00 13751.67 685.15 23473.80
00:15:51.669 Received shutdown signal, test time was about 15.000000 seconds
00:15:51.669
00:15:51.669 Latency(us)
00:15:51.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:51.669 ===================================================================================================================
00:15:51.669 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:51.669 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:15:51.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
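The trace that follows is the second phase of host/failover.sh: the script counts the three 'Resetting controller successful' events from the 15-second run, re-registers listeners on ports 4421 and 4422, starts a fresh bdevperf in RPC-wait mode, and attaches one NVMe0 controller path per portal. A condensed, hand-written restatement of those traced commands, kept only as a reading aid (paths, ports and the NQN are copied from the trace; the SPDK variable and the loop are shorthand, not the script's literal text):
    SPDK=/home/vagrant/spdk_repo/spdk
    # re-add the secondary portals on the target side
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # -z makes bdevperf wait on its RPC socket until perform_tests is sent later in the trace
    $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    # one controller path per portal; the script later detaches the 4420 path to force a failover
    for port in 4420 4421 4422; do
        $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
            -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done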
00:15:51.669 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:51.669 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:51.669 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75288 00:15:51.669 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75288 /var/tmp/bdevperf.sock 00:15:51.669 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75288 ']' 00:15:51.669 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:51.669 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:51.669 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:51.669 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:51.669 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:51.669 13:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:51.928 13:58:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:51.928 13:58:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:15:51.928 13:58:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:52.186 [2024-07-25 13:58:41.123760] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:52.186 13:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:52.444 [2024-07-25 13:58:41.360021] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:52.444 13:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:52.711 NVMe0n1 00:15:52.711 13:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:52.968 00:15:52.968 13:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:53.533 00:15:53.533 13:58:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:53.533 13:58:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:53.791 13:58:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:54.049 13:58:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:57.331 13:58:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:57.331 13:58:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:57.331 13:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75370 00:15:57.331 13:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75370 00:15:57.331 13:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:58.705 0 00:15:58.705 13:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:58.705 [2024-07-25 13:58:39.895827] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:15:58.705 [2024-07-25 13:58:39.895979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75288 ] 00:15:58.705 [2024-07-25 13:58:40.029220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.705 [2024-07-25 13:58:40.149049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.705 [2024-07-25 13:58:40.202293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:58.705 [2024-07-25 13:58:42.904441] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:58.705 [2024-07-25 13:58:42.904593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.705 [2024-07-25 13:58:42.904619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.705 [2024-07-25 13:58:42.904639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.705 [2024-07-25 13:58:42.904652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.705 [2024-07-25 13:58:42.904666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.705 [2024-07-25 13:58:42.904679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.705 [2024-07-25 13:58:42.904694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.705 [2024-07-25 13:58:42.904707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.705 [2024-07-25 13:58:42.904721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:58.705 [2024-07-25 13:58:42.904776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:58.705 [2024-07-25 13:58:42.904808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x705570 (9): Bad file descriptor 00:15:58.705 [2024-07-25 13:58:42.913205] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:58.705 Running I/O for 1 seconds... 00:15:58.705 00:15:58.705 Latency(us) 00:15:58.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.705 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:58.705 Verification LBA range: start 0x0 length 0x4000 00:15:58.705 NVMe0n1 : 1.01 6614.32 25.84 0.00 0.00 19267.31 3425.75 15192.44 00:15:58.705 =================================================================================================================== 00:15:58.705 Total : 6614.32 25.84 0.00 0.00 19267.31 3425.75 15192.44 00:15:58.705 13:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:58.705 13:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:58.705 13:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:58.964 13:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:58.964 13:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:59.222 13:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:59.498 13:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:16:02.782 13:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:16:02.782 13:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:02.782 13:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75288 00:16:02.782 13:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75288 ']' 00:16:02.782 13:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75288 00:16:02.782 13:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:16:02.782 13:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:02.782 13:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75288 00:16:02.782 13:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:02.782 killing process with pid 75288 00:16:02.782 13:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:02.782 13:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75288' 00:16:02.782 13:58:51 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75288 00:16:02.782 13:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75288 00:16:03.349 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:16:03.349 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:03.607 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:03.607 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:03.607 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:16:03.607 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:03.607 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:16:03.607 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:03.607 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:16:03.607 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:03.607 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:03.607 rmmod nvme_tcp 00:16:03.607 rmmod nvme_fabrics 00:16:03.607 rmmod nvme_keyring 00:16:03.607 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:03.607 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:16:03.607 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:16:03.607 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75028 ']' 00:16:03.607 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75028 00:16:03.608 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75028 ']' 00:16:03.608 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75028 00:16:03.608 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:16:03.608 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:03.608 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75028 00:16:03.608 killing process with pid 75028 00:16:03.608 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:03.608 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:03.608 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75028' 00:16:03.608 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75028 00:16:03.608 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75028 00:16:03.920 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:03.920 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:03.920 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:03.920 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:16:03.920 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:03.920 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.920 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:03.920 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.920 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:03.920 ************************************ 00:16:03.920 END TEST nvmf_failover 00:16:03.920 ************************************ 00:16:03.920 00:16:03.920 real 0m33.774s 00:16:03.920 user 2m10.947s 00:16:03.920 sys 0m5.775s 00:16:03.920 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:03.920 13:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:04.180 13:58:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:04.181 13:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:04.181 13:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:04.181 13:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.181 ************************************ 00:16:04.181 START TEST nvmf_host_discovery 00:16:04.181 ************************************ 00:16:04.181 13:58:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:04.181 * Looking for test storage... 
00:16:04.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:04.181 13:58:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 
-- # ip link set nvmf_tgt_br nomaster 00:16:04.181 Cannot find device "nvmf_tgt_br" 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:04.181 Cannot find device "nvmf_tgt_br2" 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:04.181 Cannot find device "nvmf_tgt_br" 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:04.181 Cannot find device "nvmf_tgt_br2" 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:04.181 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:04.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:04.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@185 
-- # ip link set nvmf_tgt_br up 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:04.441 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:04.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:16:04.442 00:16:04.442 --- 10.0.0.2 ping statistics --- 00:16:04.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.442 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:04.442 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:04.442 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:16:04.442 00:16:04.442 --- 10.0.0.3 ping statistics --- 00:16:04.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.442 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:04.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:04.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:16:04.442 00:16:04.442 --- 10.0.0.1 ping statistics --- 00:16:04.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.442 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=75643 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 75643 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 75643 ']' 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:04.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:04.442 13:58:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.700 [2024-07-25 13:58:53.507733] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:16:04.700 [2024-07-25 13:58:53.507833] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.700 [2024-07-25 13:58:53.647014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.959 [2024-07-25 13:58:53.767069] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:04.959 [2024-07-25 13:58:53.767133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:04.959 [2024-07-25 13:58:53.767145] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:04.959 [2024-07-25 13:58:53.767154] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:04.959 [2024-07-25 13:58:53.767161] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:04.959 [2024-07-25 13:58:53.767190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.959 [2024-07-25 13:58:53.820731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.526 [2024-07-25 13:58:54.496716] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.526 [2024-07-25 13:58:54.504859] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.526 13:58:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.526 null0 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.526 null1 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.526 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75674 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75674 /tmp/host.sock 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 75674 ']' 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:05.526 13:58:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.785 [2024-07-25 13:58:54.599758] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:16:05.785 [2024-07-25 13:58:54.600181] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75674 ] 00:16:05.785 [2024-07-25 13:58:54.742519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.045 [2024-07-25 13:58:54.863652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.045 [2024-07-25 13:58:54.918466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@91 -- # get_subsystem_names 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:06.981 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:06.982 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.982 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.982 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:06.982 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.982 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:06.982 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:06.982 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:06.982 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.982 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.982 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:06.982 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:06.982 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:06.982 13:58:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.241 [2024-07-25 13:58:56.045195] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:16:07.241 13:58:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:07.241 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:07.242 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:07.242 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:07.242 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:07.242 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.242 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.242 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.242 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:07.242 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:07.242 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:07.242 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:07.242 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:07.242 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:07.242 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:07.242 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.242 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:07.242 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.242 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:07.242 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:07.242 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.500 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:16:07.500 13:58:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:16:07.759 [2024-07-25 13:58:56.680122] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:07.759 [2024-07-25 13:58:56.680173] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:07.759 [2024-07-25 13:58:56.680193] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:07.759 
[2024-07-25 13:58:56.686178] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:07.759 [2024-07-25 13:58:56.743856] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:07.759 [2024-07-25 13:58:56.743903] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:08.326 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
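The checks that follow are driven by a few helper functions from host/discovery.sh and common/autotest_common.sh whose behaviour can be read directly off the xtrace output above. A minimal sketch, reconstructed from that trace (rpc_cmd is assumed to be the autotest framework's wrapper around scripts/rpc.py, and /tmp/host.sock is the host application's RPC socket; the real helpers may differ in detail):

get_subsystem_names() {
    # controller names known to the host app, sorted and space-joined
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # bdevs created from the attached namespaces, e.g. "nvme0n1 nvme0n2"
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
    # listener ports (trsvcid) of every path attached for controller $1
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

get_notification_count() {
    # notifications issued since the last observed notify_id
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}

waitforcondition in common/autotest_common.sh simply re-evaluates the quoted condition up to 10 times with a one-second sleep between attempts, which is why the sleep 1 earlier in the trace is followed by a successful re-check once the discovery attach of nvme0 completes.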
00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.586 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:08.587 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:08.587 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:08.587 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:08.587 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:08.587 13:58:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:08.587 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:08.587 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:08.587 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:08.587 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:08.587 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:08.587 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.587 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.587 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:08.587 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.847 [2024-07-25 13:58:57.626676] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:08.847 [2024-07-25 13:58:57.626906] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:08.847 [2024-07-25 13:58:57.626939] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:08.847 [2024-07-25 13:58:57.632901] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:08.847 13:58:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:08.847 [2024-07-25 13:58:57.693198] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:08.847 [2024-07-25 13:58:57.693228] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:08.847 [2024-07-25 13:58:57.693236] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:08.847 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.848 [2024-07-25 13:58:57.859200] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:08.848 [2024-07-25 13:58:57.859243] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:08.848 [2024-07-25 13:58:57.861053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.848 [2024-07-25 13:58:57.861097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.848 [2024-07-25 13:58:57.861112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.848 [2024-07-25 13:58:57.861122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.848 [2024-07-25 13:58:57.861133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.848 [2024-07-25 13:58:57.861142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.848 [2024-07-25 13:58:57.861153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.848 [2024-07-25 13:58:57.861162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.848 [2024-07-25 13:58:57.861172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232620 is same with the state(5) to be set 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # 
eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:08.848 [2024-07-25 13:58:57.865249] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:16:08.848 [2024-07-25 13:58:57.865283] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:08.848 [2024-07-25 13:58:57.865375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1232620 (9): Bad file descriptor 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:08.848 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:09.108 13:58:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:09.108 13:58:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.108 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:09.367 13:58:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.367 13:58:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:10.369 [2024-07-25 13:58:59.280743] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:10.369 [2024-07-25 13:58:59.280786] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:10.369 [2024-07-25 13:58:59.280806] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:10.369 [2024-07-25 13:58:59.286781] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:10.369 [2024-07-25 13:58:59.347561] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:10.369 [2024-07-25 13:58:59.347627] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:10.369 request: 00:16:10.369 { 00:16:10.369 "name": "nvme", 00:16:10.369 "trtype": "tcp", 00:16:10.369 "traddr": "10.0.0.2", 00:16:10.369 "adrfam": "ipv4", 00:16:10.369 "trsvcid": "8009", 00:16:10.369 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:10.369 "wait_for_attach": true, 00:16:10.369 "method": "bdev_nvme_start_discovery", 00:16:10.369 "req_id": 1 00:16:10.369 } 00:16:10.369 Got JSON-RPC error response 00:16:10.369 response: 00:16:10.369 { 00:16:10.369 "code": -17, 00:16:10.369 "message": "File exists" 00:16:10.369 } 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:10.369 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:10.629 request: 00:16:10.629 { 00:16:10.629 "name": "nvme_second", 00:16:10.629 "trtype": "tcp", 00:16:10.629 "traddr": "10.0.0.2", 00:16:10.629 "adrfam": "ipv4", 00:16:10.629 "trsvcid": "8009", 00:16:10.629 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:10.629 "wait_for_attach": true, 00:16:10.629 "method": "bdev_nvme_start_discovery", 00:16:10.629 "req_id": 1 00:16:10.629 } 00:16:10.629 Got JSON-RPC error response 00:16:10.629 response: 00:16:10.629 { 00:16:10.629 "code": -17, 00:16:10.629 "message": "File exists" 00:16:10.629 } 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:10.629 13:58:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.629 13:58:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:12.005 [2024-07-25 13:59:00.616332] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:12.005 [2024-07-25 13:59:00.616415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x126ec30 with addr=10.0.0.2, port=8010 00:16:12.005 [2024-07-25 13:59:00.616446] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:12.005 [2024-07-25 13:59:00.616458] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:12.005 [2024-07-25 13:59:00.616468] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:12.941 [2024-07-25 13:59:01.616325] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:12.941 [2024-07-25 13:59:01.616417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x126ec30 with addr=10.0.0.2, port=8010 00:16:12.941 [2024-07-25 13:59:01.616443] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:12.941 [2024-07-25 13:59:01.616455] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:16:12.941 [2024-07-25 13:59:01.616464] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:13.876 [2024-07-25 13:59:02.616157] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:13.876 request: 00:16:13.876 { 00:16:13.876 "name": "nvme_second", 00:16:13.876 "trtype": "tcp", 00:16:13.876 "traddr": "10.0.0.2", 00:16:13.876 "adrfam": "ipv4", 00:16:13.876 "trsvcid": "8010", 00:16:13.876 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:13.876 "wait_for_attach": false, 00:16:13.876 "attach_timeout_ms": 3000, 00:16:13.876 "method": "bdev_nvme_start_discovery", 00:16:13.876 "req_id": 1 00:16:13.876 } 00:16:13.876 Got JSON-RPC error response 00:16:13.876 response: 00:16:13.876 { 00:16:13.876 "code": -110, 00:16:13.876 "message": "Connection timed out" 00:16:13.876 } 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75674 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:13.876 rmmod nvme_tcp 00:16:13.876 rmmod nvme_fabrics 00:16:13.876 rmmod nvme_keyring 00:16:13.876 13:59:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 75643 ']' 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 75643 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 75643 ']' 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 75643 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75643 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:13.876 killing process with pid 75643 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75643' 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 75643 00:16:13.876 13:59:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 75643 00:16:14.135 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:14.135 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:14.135 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:14.135 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:14.135 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:14.135 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.135 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:14.135 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.135 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:14.135 00:16:14.135 real 0m10.127s 00:16:14.135 user 0m19.514s 00:16:14.135 sys 0m2.060s 00:16:14.135 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:14.135 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.135 ************************************ 00:16:14.135 END TEST nvmf_host_discovery 00:16:14.135 ************************************ 00:16:14.135 13:59:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:14.135 13:59:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:14.135 13:59:03 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:14.135 13:59:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.135 ************************************ 00:16:14.135 START TEST nvmf_host_multipath_status 00:16:14.135 ************************************ 00:16:14.135 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:14.394 * Looking for test storage... 00:16:14.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:14.394 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:14.394 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:14.394 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.394 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:14.395 Cannot find device "nvmf_tgt_br" 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:14.395 Cannot find device "nvmf_tgt_br2" 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:14.395 Cannot find device "nvmf_tgt_br" 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:14.395 Cannot find device "nvmf_tgt_br2" 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:14.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:14.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:14.395 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth 
peer name nvmf_tgt_br 00:16:14.396 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:14.396 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:14.396 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:14.396 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:14.396 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:14.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:16:14.654 00:16:14.654 --- 10.0.0.2 ping statistics --- 00:16:14.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.654 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:14.654 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:14.654 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:14.654 00:16:14.654 --- 10.0.0.3 ping statistics --- 00:16:14.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.654 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:14.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:14.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:16:14.654 00:16:14.654 --- 10.0.0.1 ping statistics --- 00:16:14.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.654 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=76126 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 76126 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 76126 ']' 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.654 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:14.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.655 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:14.655 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:14.655 13:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:14.655 [2024-07-25 13:59:03.628240] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:16:14.655 [2024-07-25 13:59:03.629063] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.913 [2024-07-25 13:59:03.765459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:14.913 [2024-07-25 13:59:03.898038] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.913 [2024-07-25 13:59:03.898668] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.913 [2024-07-25 13:59:03.898800] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.913 [2024-07-25 13:59:03.898818] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.913 [2024-07-25 13:59:03.898828] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.913 [2024-07-25 13:59:03.899514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.913 [2024-07-25 13:59:03.899526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.171 [2024-07-25 13:59:03.956989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:15.737 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:15.737 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:16:15.737 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:15.737 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:15.737 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:15.737 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.737 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76126 00:16:15.737 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:16.006 [2024-07-25 13:59:04.902271] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:16.006 13:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:16.290 Malloc0 00:16:16.290 13:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:16.548 13:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:16.806 13:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.064 [2024-07-25 13:59:06.061091] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.064 13:59:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:17.631 [2024-07-25 13:59:06.397369] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:17.631 13:59:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76186 00:16:17.631 13:59:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:17.631 13:59:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:17.631 13:59:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76186 /var/tmp/bdevperf.sock 00:16:17.631 13:59:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 76186 ']' 00:16:17.631 13:59:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:17.631 13:59:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:17.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:17.631 13:59:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:17.631 13:59:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:17.631 13:59:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:18.564 13:59:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:18.564 13:59:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:16:18.564 13:59:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:18.822 13:59:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:19.080 Nvme0n1 00:16:19.080 13:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:19.728 Nvme0n1 00:16:19.728 13:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:19.728 13:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:21.630 13:59:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:21.630 13:59:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:21.888 13:59:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:22.146 13:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:23.079 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:23.079 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:23.079 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.079 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:23.376 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.376 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:23.376 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.376 13:59:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:23.635 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:23.635 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:23.635 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.635 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:24.201 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.201 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:24.201 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:24.201 13:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.201 13:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.201 13:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:24.201 13:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.201 13:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:24.459 13:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.459 13:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:24.459 13:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.459 13:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:24.718 13:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.718 13:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:24.718 13:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:25.285 13:59:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:25.544 13:59:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:26.481 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:26.481 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:26.481 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.481 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:26.739 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:26.739 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:26.739 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.739 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:26.997 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:26.997 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:26.997 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.997 13:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:27.304 13:59:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.304 13:59:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:27.304 13:59:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.304 13:59:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:27.569 13:59:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.569 13:59:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:27.569 13:59:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.569 13:59:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:27.828 13:59:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.828 13:59:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:28.086 13:59:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.086 13:59:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:28.345 13:59:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:28.345 13:59:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:28.345 13:59:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:28.603 13:59:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:28.861 13:59:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:29.797 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:29.797 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:29.797 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.797 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:30.056 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:30.056 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:30.056 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:30.056 13:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.314 13:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:30.314 13:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:30.314 13:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:30.314 13:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.573 13:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:30.573 13:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:16:30.573 13:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.573 13:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:30.832 13:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:30.832 13:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:30.832 13:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.832 13:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:31.089 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:31.089 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:31.089 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.089 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:31.347 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:31.347 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:31.347 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:31.605 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:31.864 13:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:33.240 13:59:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:33.240 13:59:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:33.240 13:59:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.240 13:59:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:33.240 13:59:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.240 13:59:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:33.240 13:59:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:33.240 13:59:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.499 13:59:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:33.499 13:59:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:33.499 13:59:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.499 13:59:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:34.066 13:59:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.066 13:59:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:34.066 13:59:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:34.066 13:59:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.066 13:59:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.066 13:59:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:34.066 13:59:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.066 13:59:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:34.642 13:59:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.642 13:59:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:34.642 13:59:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.642 13:59:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:34.899 13:59:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:34.899 13:59:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:34.899 13:59:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:35.158 13:59:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:35.416 13:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:36.351 13:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:36.351 13:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:36.351 13:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.351 13:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:36.610 13:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:36.610 13:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:36.610 13:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:36.610 13:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.869 13:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:36.869 13:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:36.869 13:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.869 13:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:37.129 13:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.129 13:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:37.129 13:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.129 13:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:37.389 13:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.389 13:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:37.389 13:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.389 13:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:16:37.681 13:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:37.681 13:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:37.681 13:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.681 13:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:37.970 13:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:37.970 13:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:37.970 13:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:38.228 13:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:38.488 13:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:39.421 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:39.421 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:39.421 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.421 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:39.679 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:39.679 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:39.679 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.679 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:39.937 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.937 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:39.937 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.937 13:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:16:40.195 13:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.195 13:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:40.195 13:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.195 13:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:40.453 13:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.453 13:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:40.453 13:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.453 13:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:40.711 13:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:40.711 13:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:40.711 13:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:40.711 13:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.969 13:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.969 13:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:41.227 13:59:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:41.227 13:59:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:41.484 13:59:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:42.050 13:59:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:42.985 13:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:42.985 13:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:42.985 13:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
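
At this point the trace switches the bdev to the active_active multipath policy (bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active) and sets both listeners to optimized, so the following check_status true true true true true true now expects both paths to report current=true rather than only one. An illustrative one-liner (not part of the test script) that summarizes all three flags per path from the same RPC output:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
      jq -r '.poll_groups[].io_paths[] |
             "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"'
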
00:16:42.985 13:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:43.244 13:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.244 13:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:43.244 13:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.244 13:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:43.539 13:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.539 13:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:43.539 13:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.539 13:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:43.798 13:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.798 13:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:43.798 13:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.798 13:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:44.056 13:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.056 13:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:44.056 13:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.056 13:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:44.314 13:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.314 13:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:44.315 13:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.315 13:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:44.573 13:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.573 
13:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:44.573 13:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:44.832 13:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:45.091 13:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:46.466 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:46.466 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:46.466 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.466 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:46.467 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:46.467 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:46.467 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.467 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:46.725 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:46.725 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:46.725 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.725 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:46.986 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:46.986 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:46.986 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.986 13:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:47.553 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.553 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:47.553 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.553 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:47.812 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.812 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:47.812 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.812 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:48.070 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:48.070 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:48.070 13:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:48.328 13:59:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:48.586 13:59:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:49.614 13:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:49.614 13:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:49.614 13:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.614 13:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:50.181 13:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.181 13:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:50.181 13:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.181 13:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:50.439 13:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.439 13:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:16:50.439 13:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.439 13:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:50.697 13:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.698 13:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:50.698 13:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.698 13:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:50.956 13:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.956 13:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:50.956 13:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.956 13:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:51.215 13:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:51.215 13:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:51.215 13:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:51.215 13:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:51.473 13:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:51.473 13:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:51.473 13:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:51.731 13:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:51.989 13:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:52.931 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:52.931 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:52.931 13:59:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:52.931 13:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:53.496 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:53.496 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:53.496 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.496 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:53.754 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:53.754 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:53.754 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.754 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:54.012 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.012 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:54.012 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.012 13:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:54.271 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.271 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:54.271 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.271 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:54.539 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.539 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:54.539 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.539 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:16:55.106 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:55.106 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76186 00:16:55.106 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 76186 ']' 00:16:55.106 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 76186 00:16:55.106 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:16:55.106 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:55.106 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76186 00:16:55.106 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:55.106 killing process with pid 76186 00:16:55.106 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:55.106 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76186' 00:16:55.106 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 76186 00:16:55.106 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 76186 00:16:55.106 Connection closed with partial response: 00:16:55.106 00:16:55.106 00:16:55.371 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76186 00:16:55.371 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:55.371 [2024-07-25 13:59:06.489038] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:16:55.371 [2024-07-25 13:59:06.489330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76186 ] 00:16:55.371 [2024-07-25 13:59:06.627994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.371 [2024-07-25 13:59:06.774334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.371 [2024-07-25 13:59:06.831181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:55.371 Running I/O for 90 seconds... 
00:16:55.371 [2024-07-25 13:59:23.940449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.371 [2024-07-25 13:59:23.940544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:55.371 [2024-07-25 13:59:23.940611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.371 [2024-07-25 13:59:23.940634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:55.371 [2024-07-25 13:59:23.940660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.371 [2024-07-25 13:59:23.940675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:55.371 [2024-07-25 13:59:23.940697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.371 [2024-07-25 13:59:23.940712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.940734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.372 [2024-07-25 13:59:23.940750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.940771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.372 [2024-07-25 13:59:23.940786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.940809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.372 [2024-07-25 13:59:23.940823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.940845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.372 [2024-07-25 13:59:23.940859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.940889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.372 [2024-07-25 13:59:23.940904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.940925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.372 [2024-07-25 13:59:23.940940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.940962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.372 [2024-07-25 13:59:23.941001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.372 [2024-07-25 13:59:23.941041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.372 [2024-07-25 13:59:23.941078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.372 [2024-07-25 13:59:23.941114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.372 [2024-07-25 13:59:23.941151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.372 [2024-07-25 13:59:23.941187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.372 [2024-07-25 13:59:23.941252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.372 [2024-07-25 13:59:23.941290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.372 [2024-07-25 13:59:23.941347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.372 [2024-07-25 13:59:23.941384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.372 [2024-07-25 13:59:23.941421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.372 [2024-07-25 13:59:23.941457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.372 [2024-07-25 13:59:23.941505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.372 [2024-07-25 13:59:23.941544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.372 [2024-07-25 13:59:23.941581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.372 [2024-07-25 13:59:23.941618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.372 [2024-07-25 13:59:23.941654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.372 [2024-07-25 13:59:23.941690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.372 [2024-07-25 13:59:23.941726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
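
Each of these paired notices shows the submitted command (opcode, sqid/cid, nsid, starting LBA, len:8 logical blocks; on the WRITE entries the SGL length of 0x1000 bytes implies 512-byte blocks, i.e. 4 KiB per I/O) followed by its completion status. Illustrative only: tallying how many of the dumped completions were reads versus writes:

  grep -Eo ' (READ|WRITE) sqid' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt | sort | uniq -c
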
00:16:55.372 [2024-07-25 13:59:23.941763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.372 [2024-07-25 13:59:23.941799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.372 [2024-07-25 13:59:23.941835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:60248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.372 [2024-07-25 13:59:23.941872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.372 [2024-07-25 13:59:23.941908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.372 [2024-07-25 13:59:23.941944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.941975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.372 [2024-07-25 13:59:23.941991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.942013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.372 [2024-07-25 13:59:23.942028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:55.372 [2024-07-25 13:59:23.942050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.372 [2024-07-25 13:59:23.942065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.373 [2024-07-25 13:59:23.942103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 
nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.373 [2024-07-25 13:59:23.942139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.373 [2024-07-25 13:59:23.942175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.373 [2024-07-25 13:59:23.942211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.373 [2024-07-25 13:59:23.942248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.373 [2024-07-25 13:59:23.942284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.373 [2024-07-25 13:59:23.942334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.373 [2024-07-25 13:59:23.942395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.373 [2024-07-25 13:59:23.942433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.373 [2024-07-25 13:59:23.942481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.373 [2024-07-25 13:59:23.942525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.373 [2024-07-25 13:59:23.942562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.373 [2024-07-25 13:59:23.942599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.373 [2024-07-25 13:59:23.942636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.373 [2024-07-25 13:59:23.942673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.373 [2024-07-25 13:59:23.942711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.373 [2024-07-25 13:59:23.942748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.373 [2024-07-25 13:59:23.942786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.373 [2024-07-25 13:59:23.942823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.373 [2024-07-25 13:59:23.942859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:55.373 [2024-07-25 13:59:23.942881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.373 [2024-07-25 13:59:23.942896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:16:55.373 - 00:16:55.375 [2024-07-25 13:59:23.942934 - 13:59:23.946703] nvme_qpair.c: repeated *NOTICE* pairs: 243:nvme_io_qpair_print_command (READ sqid:1 nsid:1 lba:60336-60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; WRITE sqid:1 nsid:1 lba:61016-61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000; various cid), each followed by 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 (sqhd 0029-006d)
00:16:55.375 - 00:16:55.379 [2024-07-25 13:59:40.874246 - 13:59:40.884296] nvme_qpair.c: repeated *NOTICE* pairs: 243:nvme_io_qpair_print_command (READ sqid:1 nsid:1 lba:45944-46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; WRITE sqid:1 nsid:1 lba:46496-47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000; various cid), each followed by 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 (sqhd 000c-007f, 0000-0014)
00:16:55.379 [2024-07-25 13:59:40.884344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.379 [2024-07-25 13:59:40.884362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:55.379 [2024-07-25 13:59:40.884383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.379 [2024-07-25 13:59:40.884398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:55.379 [2024-07-25 13:59:40.884419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.379 [2024-07-25 13:59:40.884444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:55.379 [2024-07-25 13:59:40.884467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.379 [2024-07-25 13:59:40.884483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:55.379 [2024-07-25 13:59:40.884505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.379 [2024-07-25 13:59:40.884519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:55.379 [2024-07-25 13:59:40.884541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.379 [2024-07-25 13:59:40.884556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:55.379 [2024-07-25 13:59:40.884594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.379 [2024-07-25 13:59:40.884613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.884635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.380 [2024-07-25 13:59:40.884650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.884671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.380 [2024-07-25 13:59:40.884686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.884707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.380 [2024-07-25 13:59:40.884722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.884744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:55.380 [2024-07-25 13:59:40.884759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.884780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.380 [2024-07-25 13:59:40.884794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.884816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.380 [2024-07-25 13:59:40.884831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.884852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.380 [2024-07-25 13:59:40.884867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.884888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.380 [2024-07-25 13:59:40.884912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.884936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.380 [2024-07-25 13:59:40.884951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.884972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.380 [2024-07-25 13:59:40.884987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.885008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.380 [2024-07-25 13:59:40.885022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.885044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.380 [2024-07-25 13:59:40.885058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.885080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.380 [2024-07-25 13:59:40.885095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.886984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:46728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.380 [2024-07-25 13:59:40.887019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.887048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.380 [2024-07-25 13:59:40.887065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.887087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.380 [2024-07-25 13:59:40.887102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.887124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.380 [2024-07-25 13:59:40.887139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.887160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.380 [2024-07-25 13:59:40.887175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.887197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.380 [2024-07-25 13:59:40.887211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.887233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.380 [2024-07-25 13:59:40.887247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.887284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.380 [2024-07-25 13:59:40.887317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.887342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.380 [2024-07-25 13:59:40.887358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.887380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.380 [2024-07-25 13:59:40.887395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.887416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.380 [2024-07-25 13:59:40.887430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.887465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.380 [2024-07-25 13:59:40.887481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.887503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.380 [2024-07-25 13:59:40.887517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.887539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.380 [2024-07-25 13:59:40.887553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.887575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.380 [2024-07-25 13:59:40.887589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.887611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.380 [2024-07-25 13:59:40.887631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.887654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:46072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.380 [2024-07-25 13:59:40.887668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.887690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.380 [2024-07-25 13:59:40.887705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.889564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.380 [2024-07-25 13:59:40.889592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.889644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.380 [2024-07-25 13:59:40.889662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 
dnr:0 00:16:55.380 [2024-07-25 13:59:40.889685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.380 [2024-07-25 13:59:40.889700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:55.380 [2024-07-25 13:59:40.889722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:46336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.380 [2024-07-25 13:59:40.889737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.889758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.381 [2024-07-25 13:59:40.889774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.889795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.381 [2024-07-25 13:59:40.889810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.889831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.381 [2024-07-25 13:59:40.889845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.889867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.381 [2024-07-25 13:59:40.889881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.889902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.381 [2024-07-25 13:59:40.889917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.889941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.381 [2024-07-25 13:59:40.889956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.889977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.381 [2024-07-25 13:59:40.889992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.381 [2024-07-25 13:59:40.890028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.381 [2024-07-25 13:59:40.890065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.381 [2024-07-25 13:59:40.890113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.381 [2024-07-25 13:59:40.890152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.381 [2024-07-25 13:59:40.890189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.381 [2024-07-25 13:59:40.890226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.381 [2024-07-25 13:59:40.890261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.381 [2024-07-25 13:59:40.890312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.381 [2024-07-25 13:59:40.890353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.381 [2024-07-25 13:59:40.890390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:47128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.381 [2024-07-25 13:59:40.890426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:47160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.381 [2024-07-25 13:59:40.890462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:47192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.381 [2024-07-25 13:59:40.890498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.381 [2024-07-25 13:59:40.890534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.381 [2024-07-25 13:59:40.890580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.381 [2024-07-25 13:59:40.890618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.381 [2024-07-25 13:59:40.890654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.381 [2024-07-25 13:59:40.890690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.381 [2024-07-25 13:59:40.890726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.381 [2024-07-25 13:59:40.890763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:55.381 [2024-07-25 13:59:40.890784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:55.382 [2024-07-25 13:59:40.890799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:55.382 [2024-07-25 13:59:40.890820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.382 [2024-07-25 13:59:40.890835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:55.382 [2024-07-25 13:59:40.890856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.382 [2024-07-25 13:59:40.890870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:55.382 [2024-07-25 13:59:40.890892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.382 [2024-07-25 13:59:40.890906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:55.382 [2024-07-25 13:59:40.890927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.382 [2024-07-25 13:59:40.890942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:55.382 [2024-07-25 13:59:40.890963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.382 [2024-07-25 13:59:40.890977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:55.382 [2024-07-25 13:59:40.890998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.382 [2024-07-25 13:59:40.891021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:55.382 [2024-07-25 13:59:40.891044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.382 [2024-07-25 13:59:40.891059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.382 [2024-07-25 13:59:40.891080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.382 [2024-07-25 13:59:40.891094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:55.382 [2024-07-25 13:59:40.891116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.382 [2024-07-25 13:59:40.891131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:55.382 [2024-07-25 13:59:40.893090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:46776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.382 [2024-07-25 13:59:40.893119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:55.382 [2024-07-25 13:59:40.893166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.383 [2024-07-25 13:59:40.893185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.893209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.383 [2024-07-25 13:59:40.893223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.893453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:46816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.893483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.893506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.893523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.893546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.893560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.893582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.893596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.893618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.383 [2024-07-25 13:59:40.893632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.893654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.893668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.893705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.893721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.893743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:47288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.893757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.893779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:47320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.893794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.893815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.383 [2024-07-25 13:59:40.893830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.893851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.383 [2024-07-25 13:59:40.893866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.893887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.893901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.893923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.893938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.893959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.383 [2024-07-25 13:59:40.893973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.894002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.894017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.894039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.894053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.894074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.383 [2024-07-25 13:59:40.894089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:16:55.383 [2024-07-25 13:59:40.894110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.894125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.894157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:47008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.894173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.894200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.383 [2024-07-25 13:59:40.894216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.894239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.383 [2024-07-25 13:59:40.894254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.894275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:47128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.894289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.894327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:47192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.894345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.894366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.383 [2024-07-25 13:59:40.894381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.894402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.383 [2024-07-25 13:59:40.894417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.894438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.894453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.894474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.894489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.894510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.383 [2024-07-25 13:59:40.894525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.894546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.894560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.894589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.894605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.894632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.894657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.894680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.894695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.894716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.383 [2024-07-25 13:59:40.894731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:55.383 [2024-07-25 13:59:40.894752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.384 [2024-07-25 13:59:40.894768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:55.384 [2024-07-25 13:59:40.894789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.384 [2024-07-25 13:59:40.894803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:55.384 [2024-07-25 13:59:40.894825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.384 [2024-07-25 13:59:40.894839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:55.384 [2024-07-25 13:59:40.894861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.384 [2024-07-25 13:59:40.894875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:55.384 [2024-07-25 13:59:40.894896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.384 [2024-07-25 13:59:40.894911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:55.384 [2024-07-25 13:59:40.894932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.384 [2024-07-25 13:59:40.894947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:55.384 [2024-07-25 13:59:40.894968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.384 [2024-07-25 13:59:40.894982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:55.384 [2024-07-25 13:59:40.895003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.384 [2024-07-25 13:59:40.895018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:55.384 [2024-07-25 13:59:40.895039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:47248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.384 [2024-07-25 13:59:40.907726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:55.384 [2024-07-25 13:59:40.907870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.384 [2024-07-25 13:59:40.907919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:55.384 [2024-07-25 13:59:40.907946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.384 [2024-07-25 13:59:40.907962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:55.384 [2024-07-25 13:59:40.907985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.384 [2024-07-25 13:59:40.907999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:55.384 [2024-07-25 13:59:40.908024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.384 [2024-07-25 13:59:40.908039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:55.384 [2024-07-25 13:59:40.908065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
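The *NOTICE* pairs above are emitted by nvme_io_qpair_print_command()/spdk_nvme_print_completion() for I/Os that completed with ASYMMETRIC ACCESS INACCESSIBLE (status code type 03h, status code 02h, a path-related status) while the multipath-status test takes the active path down, which is the condition this test is driving rather than a failure by itself. A hedged way to tally these notices from a saved copy of this console output (console.log is a placeholder file name, not an artifact of this run):

    # total I/O completions reported with the path-inaccessible status
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' console.log
    # split the retried commands by opcode
    grep -oE '\*NOTICE\*: (READ|WRITE)' console.log | sort | uniq -c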
00:16:55.384 [2024-07-25 13:59:40.908080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:55.384 Received shutdown signal, test time was about 35.312607 seconds 00:16:55.384 00:16:55.384 Latency(us) 00:16:55.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.384 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:55.384 Verification LBA range: start 0x0 length 0x4000 00:16:55.384 Nvme0n1 : 35.31 8367.74 32.69 0.00 0.00 15263.53 262.52 4026531.84 00:16:55.384 =================================================================================================================== 00:16:55.384 Total : 8367.74 32.69 0.00 0.00 15263.53 262.52 4026531.84 00:16:55.384 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.642 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:55.643 rmmod nvme_tcp 00:16:55.643 rmmod nvme_fabrics 00:16:55.643 rmmod nvme_keyring 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 76126 ']' 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 76126 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 76126 ']' 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 76126 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76126 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:55.643 killing process with pid 76126 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76126' 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 76126 00:16:55.643 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 76126 00:16:55.901 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:55.901 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:55.901 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:55.901 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:55.901 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:55.901 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.901 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:55.901 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.901 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:55.901 00:16:55.901 real 0m41.763s 00:16:55.901 user 2m15.138s 00:16:55.901 sys 0m12.621s 00:16:55.901 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:55.901 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:55.901 ************************************ 00:16:55.901 END TEST nvmf_host_multipath_status 00:16:55.901 ************************************ 00:16:56.160 13:59:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:56.160 13:59:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:56.160 13:59:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:56.160 13:59:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.160 ************************************ 00:16:56.160 START TEST nvmf_discovery_remove_ifc 00:16:56.160 ************************************ 00:16:56.160 13:59:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:56.160 * Looking for test storage... 
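The xtrace above is the nvmf_host_multipath_status teardown. Reduced to its effective commands, it amounts to roughly the following sketch, assuming the SPDK target and subsystem created earlier in this run; 76126 and cnode1 are this run's target pid and subsystem NQN, so substitute your own:

    # drop the NVMe-oF subsystem the test created on the target
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # unload the kernel initiator modules pulled in for the host side
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the nvmf target process and reap it (wait only works from the shell that launched it)
    kill 76126
    wait 76126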
00:16:56.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.160 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
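The block above pins down the identities and addresses the discovery test will use: a per-run host NQN generated with nvme gen-hostnqn, the well-known discovery port 8009, the subsystem NQN prefix, a dedicated RPC socket for the host-side app, and the 10.0.0.0/24 plan for the veth topology. A condensed view of those assignments, with values copied from the trace (nothing here is new configuration):

  NVME_HOSTNQN=$(nvme gen-hostnqn)             # random uuid-style host NQN for this run
  discovery_port=8009                          # well-known NVMe discovery service port
  discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
  nqn=nqn.2016-06.io.spdk:cnode                # subsystem prefix; cnode0 is created during discovery
  host_nqn=nqn.2021-12.io.spdk:test
  host_sock=/tmp/host.sock                     # RPC socket of the host-side bdev_nvme app
  NVMF_INITIATOR_IP=10.0.0.1                   # initiator end of the veth pair
  NVMF_FIRST_TARGET_IP=10.0.0.2                # target addresses inside nvmf_tgt_ns_spdk
  NVMF_SECOND_TARGET_IP=10.0.0.3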
00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:56.161 Cannot find device "nvmf_tgt_br" 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:56.161 Cannot find device "nvmf_tgt_br2" 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:56.161 Cannot find device "nvmf_tgt_br" 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:56.161 Cannot find device "nvmf_tgt_br2" 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:56.161 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:56.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:56.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:56.419 13:59:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:56.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:16:56.419 00:16:56.419 --- 10.0.0.2 ping statistics --- 00:16:56.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.419 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:56.419 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:56.419 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:16:56.419 00:16:56.419 --- 10.0.0.3 ping statistics --- 00:16:56.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.419 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:56.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:56.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:16:56.419 00:16:56.419 --- 10.0.0.1 ping statistics --- 00:16:56.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.419 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:56.419 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:56.420 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:56.420 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:56.420 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:56.420 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=76987 00:16:56.420 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:56.420 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 76987 00:16:56.420 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 76987 ']' 00:16:56.420 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.420 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:56.420 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.420 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:56.420 13:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:56.677 [2024-07-25 13:59:45.501713] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
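The three ping checks above confirm the virtual topology that nvmf_veth_init builds for NET_TYPE=virt runs: two veth pairs whose target ends live inside the nvmf_tgt_ns_spdk namespace, with the bridge-side ends enslaved to nvmf_br so the initiator at 10.0.0.1 can reach 10.0.0.2 and 10.0.0.3. A trimmed reconstruction of that wiring, using only commands visible in the trace (the second veth pair for 10.0.0.3 is set up the same way and omitted here); the target application's startup banner continues below:

  # Trimmed reconstruction of the veth/bridge wiring exercised above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target, succeeds once the bridge is up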
00:16:56.677 [2024-07-25 13:59:45.501840] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.677 [2024-07-25 13:59:45.647656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.934 [2024-07-25 13:59:45.813922] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.934 [2024-07-25 13:59:45.813984] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.934 [2024-07-25 13:59:45.813998] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.934 [2024-07-25 13:59:45.814009] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.934 [2024-07-25 13:59:45.814019] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:56.934 [2024-07-25 13:59:45.814056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.934 [2024-07-25 13:59:45.882430] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:57.869 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:57.869 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:16:57.869 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:57.869 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:57.869 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:57.869 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.869 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:57.869 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.869 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:57.869 [2024-07-25 13:59:46.604154] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.869 [2024-07-25 13:59:46.612328] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:57.869 null0 00:16:57.869 [2024-07-25 13:59:46.644284] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.869 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.869 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77021 00:16:57.869 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77021 /tmp/host.sock 00:16:57.869 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:57.869 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 77021 ']' 00:16:57.869 13:59:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:16:57.869 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:57.869 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:57.869 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:57.869 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:57.869 13:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:57.869 [2024-07-25 13:59:46.729101] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:16:57.869 [2024-07-25 13:59:46.729211] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77021 ] 00:16:57.869 [2024-07-25 13:59:46.866134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.127 [2024-07-25 13:59:47.014219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.694 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:58.694 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:16:58.694 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:58.694 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:58.694 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.694 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:58.694 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.694 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:58.694 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.694 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:58.952 [2024-07-25 13:59:47.779771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:58.952 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.952 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:58.952 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.952 13:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 
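Two SPDK processes are now involved: the target (nvmf_tgt -m 0x2, run inside the namespace, listening on 10.0.0.2 port 8009 for discovery and port 4420 for I/O) and a host-side app (nvmf_tgt -m 0x1 with its RPC socket on /tmp/host.sock) acting as the initiator. The host is then pointed at the discovery service with a single RPC; a recap of the host-side calls visible above, with the rpc.py path abbreviated:

  # Host-side RPC sequence recapped from the trace (rpc.py path abbreviated).
  rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  rpc.py -s /tmp/host.sock framework_start_init
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach    # returns once the controller is attached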
00:16:59.888 [2024-07-25 13:59:48.837751] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:59.888 [2024-07-25 13:59:48.837798] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:59.888 [2024-07-25 13:59:48.837829] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:59.888 [2024-07-25 13:59:48.843806] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:59.888 [2024-07-25 13:59:48.901282] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:59.888 [2024-07-25 13:59:48.901382] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:59.888 [2024-07-25 13:59:48.901414] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:59.888 [2024-07-25 13:59:48.901434] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:59.888 [2024-07-25 13:59:48.901463] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:59.888 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.888 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:59.888 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:59.888 [2024-07-25 13:59:48.906141] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1663ef0 was disconnected and freed. delete nvme_qpair. 
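The discovery controller attaches, the log page reports the NVM subsystem at 10.0.0.2:4420, and bdev_nvme attaches it as controller nvme0 with namespace bdev nvme0n1. The test then waits for that bdev with wait_for_bdev, whose expansion (rpc_cmd piped through jq, sort and xargs) is spread through the surrounding trace. A hedged sketch of how the two helpers behave here; the authoritative definitions are in host/discovery_remove_ifc.sh:

  # Sketch of the polling helpers used for the rest of the test (behavioral, not verbatim).
  get_bdev_list() {
      rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {    # e.g. wait_for_bdev nvme0n1, or wait_for_bdev '' to wait for removal
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }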
00:16:59.888 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:59.888 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:59.888 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:59.888 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:59.888 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.888 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:00.176 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.176 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:00.176 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:17:00.176 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:00.176 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:00.176 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:00.176 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:00.176 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:00.176 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:00.176 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.176 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:00.176 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:00.176 13:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.176 13:59:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:00.176 13:59:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:01.147 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:01.147 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:01.147 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:01.147 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:01.147 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:01.147 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.147 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:01.147 13:59:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.147 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:01.147 13:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:02.081 13:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:02.081 13:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:02.081 13:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.081 13:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:02.081 13:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:02.081 13:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:02.081 13:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:02.338 13:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.338 13:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:02.338 13:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:03.272 13:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:03.272 13:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:03.272 13:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:03.272 13:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.272 13:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:03.272 13:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:03.272 13:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:03.272 13:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.272 13:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:03.272 13:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:04.218 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:04.218 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:04.218 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:04.218 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.218 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:04.219 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:04.219 13:59:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:04.493 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.493 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:04.493 13:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:05.429 13:59:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:05.429 13:59:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:05.429 13:59:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.429 13:59:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:05.429 13:59:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:05.429 13:59:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:05.429 13:59:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:05.429 13:59:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.429 [2024-07-25 13:59:54.328786] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:05.429 [2024-07-25 13:59:54.328865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:05.429 [2024-07-25 13:59:54.328883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:05.429 [2024-07-25 13:59:54.328896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:05.429 [2024-07-25 13:59:54.328906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:05.429 [2024-07-25 13:59:54.328916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:05.429 [2024-07-25 13:59:54.328925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:05.429 [2024-07-25 13:59:54.328936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:05.429 [2024-07-25 13:59:54.328945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:05.430 [2024-07-25 13:59:54.328956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:05.430 [2024-07-25 13:59:54.328965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:05.430 [2024-07-25 13:59:54.328974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9ac0 is same with the state(5) to be set 00:17:05.430 [2024-07-25 13:59:54.338789] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c9ac0 (9): Bad file descriptor 00:17:05.430 [2024-07-25 13:59:54.348812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:05.430 13:59:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:05.430 13:59:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:06.365 13:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:06.365 13:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:06.365 13:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:06.365 13:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:06.365 13:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.365 13:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:06.365 13:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:06.365 [2024-07-25 13:59:55.385372] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:06.365 [2024-07-25 13:59:55.385510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c9ac0 with addr=10.0.0.2, port=4420 00:17:06.365 [2024-07-25 13:59:55.385549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9ac0 is same with the state(5) to be set 00:17:06.365 [2024-07-25 13:59:55.385617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c9ac0 (9): Bad file descriptor 00:17:06.365 [2024-07-25 13:59:55.386254] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:06.365 [2024-07-25 13:59:55.386358] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:06.365 [2024-07-25 13:59:55.386386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:06.365 [2024-07-25 13:59:55.386407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:06.365 [2024-07-25 13:59:55.386450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:06.365 [2024-07-25 13:59:55.386477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:06.623 13:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.623 13:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:06.623 13:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:07.559 [2024-07-25 13:59:56.386547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:17:07.559 [2024-07-25 13:59:56.386625] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:07.559 [2024-07-25 13:59:56.386641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:07.559 [2024-07-25 13:59:56.386652] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:07.559 [2024-07-25 13:59:56.386678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:07.559 [2024-07-25 13:59:56.386733] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:17:07.559 [2024-07-25 13:59:56.386808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:07.559 [2024-07-25 13:59:56.386827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:07.559 [2024-07-25 13:59:56.386841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:07.559 [2024-07-25 13:59:56.386852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:07.559 [2024-07-25 13:59:56.386862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:07.559 [2024-07-25 13:59:56.386872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:07.559 [2024-07-25 13:59:56.386882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:07.559 [2024-07-25 13:59:56.386892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:07.559 [2024-07-25 13:59:56.386902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:07.559 [2024-07-25 13:59:56.386911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:07.559 [2024-07-25 13:59:56.386921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
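These errors are the expected result of the fault injected earlier: the test deleted the target address and downed nvmf_tgt_if inside the namespace, so reads fail with errno 110, reconnect attempts cannot establish a TCP connection, and once the controller-loss timeout expires the controller, its nvme0n1 bdev, and the discovery entry are all dropped. The break, and the knobs that bound the recovery window (taken from the bdev_nvme_start_discovery call above), were:

  # Path break injected by the test, and the timeouts governing the cleanup seen here.
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  # --reconnect-delay-sec 1       retry the connection roughly once per second
  # --fast-io-fail-timeout-sec 1  fail outstanding I/O after about a second
  # --ctrlr-loss-timeout-sec 2    give up after ~2s, deleting nvme0/nvme0n1 and the discovery entry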
00:17:07.559 [2024-07-25 13:59:56.386969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15cd860 (9): Bad file descriptor 00:17:07.559 [2024-07-25 13:59:56.387957] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:07.559 [2024-07-25 13:59:56.387980] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:07.559 13:59:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:08.933 13:59:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:08.933 13:59:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:08.933 13:59:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:08.933 13:59:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.933 13:59:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:08.934 13:59:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:08.934 13:59:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:08.934 13:59:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.934 13:59:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:08.934 13:59:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:09.501 [2024-07-25 13:59:58.392719] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:09.501 [2024-07-25 13:59:58.392772] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:09.501 [2024-07-25 13:59:58.392794] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:09.501 [2024-07-25 13:59:58.398767] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:17:09.501 [2024-07-25 13:59:58.455470] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:09.501 [2024-07-25 13:59:58.455856] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:09.501 [2024-07-25 13:59:58.456059] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:09.501 [2024-07-25 13:59:58.456246] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:17:09.501 [2024-07-25 13:59:58.456427] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:09.501 [2024-07-25 13:59:58.461555] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1641460 was disconnected and freed. delete nvme_qpair. 
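Restoring the path completes the scenario: as soon as the address is back and the interface is up, the still-running discovery service reconnects on its own, reports the subsystem again, and bdev_nvme attaches it as a new controller (nvme1, hence the nvme1n1 bdev the test now waits for). The restore step, as traced above:

  # Restore the target path; discovery re-attaches without any further host-side RPCs.
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # wait_for_bdev nvme1n1    polls until the re-created namespace bdev appears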
00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77021 00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 77021 ']' 00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 77021 00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77021 00:17:09.760 killing process with pid 77021 00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77021' 00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 77021 00:17:09.760 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 77021 00:17:10.019 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:10.019 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:10.019 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:17:10.019 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:10.019 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:17:10.019 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:10.019 13:59:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:10.019 rmmod nvme_tcp 00:17:10.019 rmmod nvme_fabrics 00:17:10.019 rmmod nvme_keyring 00:17:10.019 13:59:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:10.277 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:17:10.277 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:17:10.277 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 76987 ']' 00:17:10.277 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 76987 00:17:10.277 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 76987 ']' 00:17:10.277 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 76987 00:17:10.277 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:17:10.277 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:10.277 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76987 00:17:10.277 killing process with pid 76987 00:17:10.277 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:10.277 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:10.277 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76987' 00:17:10.277 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 76987 00:17:10.277 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 76987 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:10.536 00:17:10.536 real 0m14.398s 00:17:10.536 user 0m24.795s 00:17:10.536 sys 0m2.611s 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:10.536 ************************************ 00:17:10.536 END TEST nvmf_discovery_remove_ifc 00:17:10.536 ************************************ 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.536 ************************************ 00:17:10.536 START TEST nvmf_identify_kernel_target 00:17:10.536 ************************************ 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:10.536 * Looking for test storage... 00:17:10.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.536 
13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:17:10.536 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:10.537 Cannot find device "nvmf_tgt_br" 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:10.537 Cannot find device "nvmf_tgt_br2" 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:10.537 Cannot find device "nvmf_tgt_br" 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:17:10.537 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:10.795 Cannot find device "nvmf_tgt_br2" 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:10.795 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:10.795 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:10.795 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:11.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:17:11.054 00:17:11.054 --- 10.0.0.2 ping statistics --- 00:17:11.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.054 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:11.054 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:11.054 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:17:11.054 00:17:11.054 --- 10.0.0.3 ping statistics --- 00:17:11.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.054 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:11.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
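The trace above is nvmf_veth_init (nvmf/common.sh) building the virtual test network: two veth pairs and a bridge connect the initiator side, left in the default namespace, to the target side, moved into nvmf_tgt_ns_spdk, and the ping checks that follow verify the links. A condensed sketch of that topology, restating only commands, interface names and addresses visible in the trace (the second target interface nvmf_tgt_if2/10.0.0.3 and the individual "ip link set ... up" steps are omitted for brevity):

  # target-side network namespace and the two veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # addressing: initiator end on 10.0.0.1, target end on 10.0.0.2
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  # bridge the host-side peers together and admit NVMe/TCP traffic
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # connectivity check, as in the trace
  ping -c 1 10.0.0.2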
00:17:11.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:17:11.054 00:17:11.054 --- 10.0.0.1 ping statistics --- 00:17:11.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.054 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:11.054 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:17:11.055 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:17:11.055 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:11.055 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:11.055 13:59:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:11.312 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:11.312 Waiting for block devices as requested 00:17:11.312 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:11.570 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:11.570 No valid GPT data, bailing 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:11.570 14:00:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:11.570 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:11.828 No valid GPT data, bailing 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:11.828 No valid GPT data, bailing 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:11.828 No valid GPT data, bailing 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:17:11.828 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:12.086 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid=71427938-e211-49fa-b6ad-486cdab0bd89 -a 10.0.0.1 -t tcp -s 4420 00:17:12.086 00:17:12.086 Discovery Log Number of Records 2, Generation counter 2 00:17:12.086 =====Discovery Log Entry 0====== 00:17:12.087 trtype: tcp 00:17:12.087 adrfam: ipv4 00:17:12.087 subtype: current discovery subsystem 00:17:12.087 treq: not specified, sq flow control disable supported 00:17:12.087 portid: 1 00:17:12.087 trsvcid: 4420 00:17:12.087 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:12.087 traddr: 10.0.0.1 00:17:12.087 eflags: none 00:17:12.087 sectype: none 00:17:12.087 =====Discovery Log Entry 1====== 00:17:12.087 trtype: tcp 00:17:12.087 adrfam: ipv4 00:17:12.087 subtype: nvme subsystem 00:17:12.087 treq: not 
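The preceding loop (configure_kernel_target in nvmf/common.sh) walks /sys/block/nvme*, skips zoned devices, and uses spdk-gpt.py plus blkid to find a namespace with no partition table; /dev/nvme1n1 is the device selected to back the kernel target. The mkdir/echo/ln -s calls then assemble the target through the nvmet configfs tree. The xtrace shows only the echoed values, not the files they are redirected into, so the attribute paths below follow the standard Linux nvmet configfs layout and should be read as an inference, not a quote from this run:

  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$sub" "$sub/namespaces/1" "$port"                    # configfs directories, as in the trace
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"   # reported later as the Model Number
  echo 1 > "$sub/attr_allow_any_host"                         # inferred destination of the first 'echo 1'
  echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"         # block device chosen by the scan above
  echo 1 > "$sub/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"                         # listen on the initiator-side address
  echo tcp  > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                            # expose the subsystem on the port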
specified, sq flow control disable supported 00:17:12.087 portid: 1 00:17:12.087 trsvcid: 4420 00:17:12.087 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:12.087 traddr: 10.0.0.1 00:17:12.087 eflags: none 00:17:12.087 sectype: none 00:17:12.087 14:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:12.087 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:12.087 ===================================================== 00:17:12.087 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:12.087 ===================================================== 00:17:12.087 Controller Capabilities/Features 00:17:12.087 ================================ 00:17:12.087 Vendor ID: 0000 00:17:12.087 Subsystem Vendor ID: 0000 00:17:12.087 Serial Number: 5f9d52ff104c5f1469d1 00:17:12.087 Model Number: Linux 00:17:12.087 Firmware Version: 6.7.0-68 00:17:12.087 Recommended Arb Burst: 0 00:17:12.087 IEEE OUI Identifier: 00 00 00 00:17:12.087 Multi-path I/O 00:17:12.087 May have multiple subsystem ports: No 00:17:12.087 May have multiple controllers: No 00:17:12.087 Associated with SR-IOV VF: No 00:17:12.087 Max Data Transfer Size: Unlimited 00:17:12.087 Max Number of Namespaces: 0 00:17:12.087 Max Number of I/O Queues: 1024 00:17:12.087 NVMe Specification Version (VS): 1.3 00:17:12.087 NVMe Specification Version (Identify): 1.3 00:17:12.087 Maximum Queue Entries: 1024 00:17:12.087 Contiguous Queues Required: No 00:17:12.087 Arbitration Mechanisms Supported 00:17:12.087 Weighted Round Robin: Not Supported 00:17:12.087 Vendor Specific: Not Supported 00:17:12.087 Reset Timeout: 7500 ms 00:17:12.087 Doorbell Stride: 4 bytes 00:17:12.087 NVM Subsystem Reset: Not Supported 00:17:12.087 Command Sets Supported 00:17:12.087 NVM Command Set: Supported 00:17:12.087 Boot Partition: Not Supported 00:17:12.087 Memory Page Size Minimum: 4096 bytes 00:17:12.087 Memory Page Size Maximum: 4096 bytes 00:17:12.087 Persistent Memory Region: Not Supported 00:17:12.087 Optional Asynchronous Events Supported 00:17:12.087 Namespace Attribute Notices: Not Supported 00:17:12.087 Firmware Activation Notices: Not Supported 00:17:12.087 ANA Change Notices: Not Supported 00:17:12.087 PLE Aggregate Log Change Notices: Not Supported 00:17:12.087 LBA Status Info Alert Notices: Not Supported 00:17:12.087 EGE Aggregate Log Change Notices: Not Supported 00:17:12.087 Normal NVM Subsystem Shutdown event: Not Supported 00:17:12.087 Zone Descriptor Change Notices: Not Supported 00:17:12.087 Discovery Log Change Notices: Supported 00:17:12.087 Controller Attributes 00:17:12.087 128-bit Host Identifier: Not Supported 00:17:12.087 Non-Operational Permissive Mode: Not Supported 00:17:12.087 NVM Sets: Not Supported 00:17:12.087 Read Recovery Levels: Not Supported 00:17:12.087 Endurance Groups: Not Supported 00:17:12.087 Predictable Latency Mode: Not Supported 00:17:12.087 Traffic Based Keep ALive: Not Supported 00:17:12.087 Namespace Granularity: Not Supported 00:17:12.087 SQ Associations: Not Supported 00:17:12.087 UUID List: Not Supported 00:17:12.087 Multi-Domain Subsystem: Not Supported 00:17:12.087 Fixed Capacity Management: Not Supported 00:17:12.087 Variable Capacity Management: Not Supported 00:17:12.087 Delete Endurance Group: Not Supported 00:17:12.087 Delete NVM Set: Not Supported 00:17:12.087 Extended LBA Formats Supported: Not Supported 00:17:12.087 Flexible Data 
Placement Supported: Not Supported 00:17:12.087 00:17:12.087 Controller Memory Buffer Support 00:17:12.087 ================================ 00:17:12.087 Supported: No 00:17:12.087 00:17:12.087 Persistent Memory Region Support 00:17:12.087 ================================ 00:17:12.087 Supported: No 00:17:12.087 00:17:12.087 Admin Command Set Attributes 00:17:12.087 ============================ 00:17:12.087 Security Send/Receive: Not Supported 00:17:12.087 Format NVM: Not Supported 00:17:12.087 Firmware Activate/Download: Not Supported 00:17:12.087 Namespace Management: Not Supported 00:17:12.087 Device Self-Test: Not Supported 00:17:12.087 Directives: Not Supported 00:17:12.087 NVMe-MI: Not Supported 00:17:12.087 Virtualization Management: Not Supported 00:17:12.087 Doorbell Buffer Config: Not Supported 00:17:12.087 Get LBA Status Capability: Not Supported 00:17:12.087 Command & Feature Lockdown Capability: Not Supported 00:17:12.087 Abort Command Limit: 1 00:17:12.087 Async Event Request Limit: 1 00:17:12.087 Number of Firmware Slots: N/A 00:17:12.087 Firmware Slot 1 Read-Only: N/A 00:17:12.087 Firmware Activation Without Reset: N/A 00:17:12.087 Multiple Update Detection Support: N/A 00:17:12.087 Firmware Update Granularity: No Information Provided 00:17:12.087 Per-Namespace SMART Log: No 00:17:12.087 Asymmetric Namespace Access Log Page: Not Supported 00:17:12.087 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:12.087 Command Effects Log Page: Not Supported 00:17:12.087 Get Log Page Extended Data: Supported 00:17:12.087 Telemetry Log Pages: Not Supported 00:17:12.087 Persistent Event Log Pages: Not Supported 00:17:12.087 Supported Log Pages Log Page: May Support 00:17:12.087 Commands Supported & Effects Log Page: Not Supported 00:17:12.087 Feature Identifiers & Effects Log Page:May Support 00:17:12.087 NVMe-MI Commands & Effects Log Page: May Support 00:17:12.087 Data Area 4 for Telemetry Log: Not Supported 00:17:12.087 Error Log Page Entries Supported: 1 00:17:12.087 Keep Alive: Not Supported 00:17:12.087 00:17:12.087 NVM Command Set Attributes 00:17:12.087 ========================== 00:17:12.087 Submission Queue Entry Size 00:17:12.087 Max: 1 00:17:12.087 Min: 1 00:17:12.087 Completion Queue Entry Size 00:17:12.087 Max: 1 00:17:12.087 Min: 1 00:17:12.087 Number of Namespaces: 0 00:17:12.087 Compare Command: Not Supported 00:17:12.087 Write Uncorrectable Command: Not Supported 00:17:12.087 Dataset Management Command: Not Supported 00:17:12.087 Write Zeroes Command: Not Supported 00:17:12.087 Set Features Save Field: Not Supported 00:17:12.087 Reservations: Not Supported 00:17:12.087 Timestamp: Not Supported 00:17:12.087 Copy: Not Supported 00:17:12.087 Volatile Write Cache: Not Present 00:17:12.087 Atomic Write Unit (Normal): 1 00:17:12.087 Atomic Write Unit (PFail): 1 00:17:12.087 Atomic Compare & Write Unit: 1 00:17:12.087 Fused Compare & Write: Not Supported 00:17:12.087 Scatter-Gather List 00:17:12.087 SGL Command Set: Supported 00:17:12.087 SGL Keyed: Not Supported 00:17:12.087 SGL Bit Bucket Descriptor: Not Supported 00:17:12.087 SGL Metadata Pointer: Not Supported 00:17:12.087 Oversized SGL: Not Supported 00:17:12.087 SGL Metadata Address: Not Supported 00:17:12.087 SGL Offset: Supported 00:17:12.087 Transport SGL Data Block: Not Supported 00:17:12.087 Replay Protected Memory Block: Not Supported 00:17:12.087 00:17:12.087 Firmware Slot Information 00:17:12.087 ========================= 00:17:12.087 Active slot: 0 00:17:12.087 00:17:12.087 00:17:12.087 Error Log 
00:17:12.087 ========= 00:17:12.087 00:17:12.087 Active Namespaces 00:17:12.087 ================= 00:17:12.087 Discovery Log Page 00:17:12.087 ================== 00:17:12.087 Generation Counter: 2 00:17:12.087 Number of Records: 2 00:17:12.087 Record Format: 0 00:17:12.087 00:17:12.087 Discovery Log Entry 0 00:17:12.087 ---------------------- 00:17:12.087 Transport Type: 3 (TCP) 00:17:12.087 Address Family: 1 (IPv4) 00:17:12.087 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:12.087 Entry Flags: 00:17:12.087 Duplicate Returned Information: 0 00:17:12.087 Explicit Persistent Connection Support for Discovery: 0 00:17:12.087 Transport Requirements: 00:17:12.087 Secure Channel: Not Specified 00:17:12.088 Port ID: 1 (0x0001) 00:17:12.088 Controller ID: 65535 (0xffff) 00:17:12.088 Admin Max SQ Size: 32 00:17:12.088 Transport Service Identifier: 4420 00:17:12.088 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:12.088 Transport Address: 10.0.0.1 00:17:12.088 Discovery Log Entry 1 00:17:12.088 ---------------------- 00:17:12.088 Transport Type: 3 (TCP) 00:17:12.088 Address Family: 1 (IPv4) 00:17:12.088 Subsystem Type: 2 (NVM Subsystem) 00:17:12.088 Entry Flags: 00:17:12.088 Duplicate Returned Information: 0 00:17:12.088 Explicit Persistent Connection Support for Discovery: 0 00:17:12.088 Transport Requirements: 00:17:12.088 Secure Channel: Not Specified 00:17:12.088 Port ID: 1 (0x0001) 00:17:12.088 Controller ID: 65535 (0xffff) 00:17:12.088 Admin Max SQ Size: 32 00:17:12.088 Transport Service Identifier: 4420 00:17:12.088 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:12.088 Transport Address: 10.0.0.1 00:17:12.088 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:12.347 get_feature(0x01) failed 00:17:12.347 get_feature(0x02) failed 00:17:12.347 get_feature(0x04) failed 00:17:12.348 ===================================================== 00:17:12.348 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:12.348 ===================================================== 00:17:12.348 Controller Capabilities/Features 00:17:12.348 ================================ 00:17:12.348 Vendor ID: 0000 00:17:12.348 Subsystem Vendor ID: 0000 00:17:12.348 Serial Number: 69510154cafcd8f8da55 00:17:12.348 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:12.348 Firmware Version: 6.7.0-68 00:17:12.348 Recommended Arb Burst: 6 00:17:12.348 IEEE OUI Identifier: 00 00 00 00:17:12.348 Multi-path I/O 00:17:12.348 May have multiple subsystem ports: Yes 00:17:12.348 May have multiple controllers: Yes 00:17:12.348 Associated with SR-IOV VF: No 00:17:12.348 Max Data Transfer Size: Unlimited 00:17:12.348 Max Number of Namespaces: 1024 00:17:12.348 Max Number of I/O Queues: 128 00:17:12.348 NVMe Specification Version (VS): 1.3 00:17:12.348 NVMe Specification Version (Identify): 1.3 00:17:12.348 Maximum Queue Entries: 1024 00:17:12.348 Contiguous Queues Required: No 00:17:12.348 Arbitration Mechanisms Supported 00:17:12.348 Weighted Round Robin: Not Supported 00:17:12.348 Vendor Specific: Not Supported 00:17:12.348 Reset Timeout: 7500 ms 00:17:12.348 Doorbell Stride: 4 bytes 00:17:12.348 NVM Subsystem Reset: Not Supported 00:17:12.348 Command Sets Supported 00:17:12.348 NVM Command Set: Supported 00:17:12.348 Boot Partition: Not Supported 00:17:12.348 Memory 
Page Size Minimum: 4096 bytes 00:17:12.348 Memory Page Size Maximum: 4096 bytes 00:17:12.348 Persistent Memory Region: Not Supported 00:17:12.348 Optional Asynchronous Events Supported 00:17:12.348 Namespace Attribute Notices: Supported 00:17:12.348 Firmware Activation Notices: Not Supported 00:17:12.348 ANA Change Notices: Supported 00:17:12.348 PLE Aggregate Log Change Notices: Not Supported 00:17:12.348 LBA Status Info Alert Notices: Not Supported 00:17:12.348 EGE Aggregate Log Change Notices: Not Supported 00:17:12.348 Normal NVM Subsystem Shutdown event: Not Supported 00:17:12.348 Zone Descriptor Change Notices: Not Supported 00:17:12.348 Discovery Log Change Notices: Not Supported 00:17:12.348 Controller Attributes 00:17:12.348 128-bit Host Identifier: Supported 00:17:12.348 Non-Operational Permissive Mode: Not Supported 00:17:12.348 NVM Sets: Not Supported 00:17:12.348 Read Recovery Levels: Not Supported 00:17:12.348 Endurance Groups: Not Supported 00:17:12.348 Predictable Latency Mode: Not Supported 00:17:12.348 Traffic Based Keep ALive: Supported 00:17:12.348 Namespace Granularity: Not Supported 00:17:12.348 SQ Associations: Not Supported 00:17:12.348 UUID List: Not Supported 00:17:12.348 Multi-Domain Subsystem: Not Supported 00:17:12.348 Fixed Capacity Management: Not Supported 00:17:12.348 Variable Capacity Management: Not Supported 00:17:12.348 Delete Endurance Group: Not Supported 00:17:12.348 Delete NVM Set: Not Supported 00:17:12.348 Extended LBA Formats Supported: Not Supported 00:17:12.348 Flexible Data Placement Supported: Not Supported 00:17:12.348 00:17:12.348 Controller Memory Buffer Support 00:17:12.348 ================================ 00:17:12.348 Supported: No 00:17:12.348 00:17:12.348 Persistent Memory Region Support 00:17:12.348 ================================ 00:17:12.348 Supported: No 00:17:12.348 00:17:12.348 Admin Command Set Attributes 00:17:12.348 ============================ 00:17:12.348 Security Send/Receive: Not Supported 00:17:12.348 Format NVM: Not Supported 00:17:12.348 Firmware Activate/Download: Not Supported 00:17:12.348 Namespace Management: Not Supported 00:17:12.348 Device Self-Test: Not Supported 00:17:12.348 Directives: Not Supported 00:17:12.348 NVMe-MI: Not Supported 00:17:12.348 Virtualization Management: Not Supported 00:17:12.348 Doorbell Buffer Config: Not Supported 00:17:12.348 Get LBA Status Capability: Not Supported 00:17:12.348 Command & Feature Lockdown Capability: Not Supported 00:17:12.348 Abort Command Limit: 4 00:17:12.348 Async Event Request Limit: 4 00:17:12.348 Number of Firmware Slots: N/A 00:17:12.348 Firmware Slot 1 Read-Only: N/A 00:17:12.348 Firmware Activation Without Reset: N/A 00:17:12.348 Multiple Update Detection Support: N/A 00:17:12.348 Firmware Update Granularity: No Information Provided 00:17:12.348 Per-Namespace SMART Log: Yes 00:17:12.348 Asymmetric Namespace Access Log Page: Supported 00:17:12.348 ANA Transition Time : 10 sec 00:17:12.348 00:17:12.348 Asymmetric Namespace Access Capabilities 00:17:12.348 ANA Optimized State : Supported 00:17:12.348 ANA Non-Optimized State : Supported 00:17:12.348 ANA Inaccessible State : Supported 00:17:12.348 ANA Persistent Loss State : Supported 00:17:12.348 ANA Change State : Supported 00:17:12.348 ANAGRPID is not changed : No 00:17:12.348 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:12.348 00:17:12.348 ANA Group Identifier Maximum : 128 00:17:12.348 Number of ANA Group Identifiers : 128 00:17:12.348 Max Number of Allowed Namespaces : 1024 00:17:12.348 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:17:12.348 Command Effects Log Page: Supported 00:17:12.348 Get Log Page Extended Data: Supported 00:17:12.348 Telemetry Log Pages: Not Supported 00:17:12.348 Persistent Event Log Pages: Not Supported 00:17:12.348 Supported Log Pages Log Page: May Support 00:17:12.348 Commands Supported & Effects Log Page: Not Supported 00:17:12.348 Feature Identifiers & Effects Log Page:May Support 00:17:12.348 NVMe-MI Commands & Effects Log Page: May Support 00:17:12.348 Data Area 4 for Telemetry Log: Not Supported 00:17:12.348 Error Log Page Entries Supported: 128 00:17:12.348 Keep Alive: Supported 00:17:12.348 Keep Alive Granularity: 1000 ms 00:17:12.348 00:17:12.348 NVM Command Set Attributes 00:17:12.348 ========================== 00:17:12.348 Submission Queue Entry Size 00:17:12.348 Max: 64 00:17:12.348 Min: 64 00:17:12.348 Completion Queue Entry Size 00:17:12.348 Max: 16 00:17:12.348 Min: 16 00:17:12.348 Number of Namespaces: 1024 00:17:12.348 Compare Command: Not Supported 00:17:12.348 Write Uncorrectable Command: Not Supported 00:17:12.348 Dataset Management Command: Supported 00:17:12.348 Write Zeroes Command: Supported 00:17:12.348 Set Features Save Field: Not Supported 00:17:12.348 Reservations: Not Supported 00:17:12.348 Timestamp: Not Supported 00:17:12.348 Copy: Not Supported 00:17:12.348 Volatile Write Cache: Present 00:17:12.348 Atomic Write Unit (Normal): 1 00:17:12.348 Atomic Write Unit (PFail): 1 00:17:12.348 Atomic Compare & Write Unit: 1 00:17:12.348 Fused Compare & Write: Not Supported 00:17:12.348 Scatter-Gather List 00:17:12.348 SGL Command Set: Supported 00:17:12.348 SGL Keyed: Not Supported 00:17:12.348 SGL Bit Bucket Descriptor: Not Supported 00:17:12.348 SGL Metadata Pointer: Not Supported 00:17:12.348 Oversized SGL: Not Supported 00:17:12.348 SGL Metadata Address: Not Supported 00:17:12.348 SGL Offset: Supported 00:17:12.348 Transport SGL Data Block: Not Supported 00:17:12.348 Replay Protected Memory Block: Not Supported 00:17:12.348 00:17:12.348 Firmware Slot Information 00:17:12.348 ========================= 00:17:12.348 Active slot: 0 00:17:12.348 00:17:12.348 Asymmetric Namespace Access 00:17:12.348 =========================== 00:17:12.348 Change Count : 0 00:17:12.348 Number of ANA Group Descriptors : 1 00:17:12.348 ANA Group Descriptor : 0 00:17:12.348 ANA Group ID : 1 00:17:12.348 Number of NSID Values : 1 00:17:12.348 Change Count : 0 00:17:12.348 ANA State : 1 00:17:12.348 Namespace Identifier : 1 00:17:12.348 00:17:12.348 Commands Supported and Effects 00:17:12.348 ============================== 00:17:12.348 Admin Commands 00:17:12.348 -------------- 00:17:12.348 Get Log Page (02h): Supported 00:17:12.348 Identify (06h): Supported 00:17:12.348 Abort (08h): Supported 00:17:12.348 Set Features (09h): Supported 00:17:12.348 Get Features (0Ah): Supported 00:17:12.348 Asynchronous Event Request (0Ch): Supported 00:17:12.348 Keep Alive (18h): Supported 00:17:12.348 I/O Commands 00:17:12.348 ------------ 00:17:12.348 Flush (00h): Supported 00:17:12.348 Write (01h): Supported LBA-Change 00:17:12.348 Read (02h): Supported 00:17:12.348 Write Zeroes (08h): Supported LBA-Change 00:17:12.348 Dataset Management (09h): Supported 00:17:12.348 00:17:12.348 Error Log 00:17:12.348 ========= 00:17:12.348 Entry: 0 00:17:12.348 Error Count: 0x3 00:17:12.348 Submission Queue Id: 0x0 00:17:12.349 Command Id: 0x5 00:17:12.349 Phase Bit: 0 00:17:12.349 Status Code: 0x2 00:17:12.349 Status Code Type: 0x0 00:17:12.349 Do Not Retry: 1 00:17:12.349 Error 
Location: 0x28 00:17:12.349 LBA: 0x0 00:17:12.349 Namespace: 0x0 00:17:12.349 Vendor Log Page: 0x0 00:17:12.349 ----------- 00:17:12.349 Entry: 1 00:17:12.349 Error Count: 0x2 00:17:12.349 Submission Queue Id: 0x0 00:17:12.349 Command Id: 0x5 00:17:12.349 Phase Bit: 0 00:17:12.349 Status Code: 0x2 00:17:12.349 Status Code Type: 0x0 00:17:12.349 Do Not Retry: 1 00:17:12.349 Error Location: 0x28 00:17:12.349 LBA: 0x0 00:17:12.349 Namespace: 0x0 00:17:12.349 Vendor Log Page: 0x0 00:17:12.349 ----------- 00:17:12.349 Entry: 2 00:17:12.349 Error Count: 0x1 00:17:12.349 Submission Queue Id: 0x0 00:17:12.349 Command Id: 0x4 00:17:12.349 Phase Bit: 0 00:17:12.349 Status Code: 0x2 00:17:12.349 Status Code Type: 0x0 00:17:12.349 Do Not Retry: 1 00:17:12.349 Error Location: 0x28 00:17:12.349 LBA: 0x0 00:17:12.349 Namespace: 0x0 00:17:12.349 Vendor Log Page: 0x0 00:17:12.349 00:17:12.349 Number of Queues 00:17:12.349 ================ 00:17:12.349 Number of I/O Submission Queues: 128 00:17:12.349 Number of I/O Completion Queues: 128 00:17:12.349 00:17:12.349 ZNS Specific Controller Data 00:17:12.349 ============================ 00:17:12.349 Zone Append Size Limit: 0 00:17:12.349 00:17:12.349 00:17:12.349 Active Namespaces 00:17:12.349 ================= 00:17:12.349 get_feature(0x05) failed 00:17:12.349 Namespace ID:1 00:17:12.349 Command Set Identifier: NVM (00h) 00:17:12.349 Deallocate: Supported 00:17:12.349 Deallocated/Unwritten Error: Not Supported 00:17:12.349 Deallocated Read Value: Unknown 00:17:12.349 Deallocate in Write Zeroes: Not Supported 00:17:12.349 Deallocated Guard Field: 0xFFFF 00:17:12.349 Flush: Supported 00:17:12.349 Reservation: Not Supported 00:17:12.349 Namespace Sharing Capabilities: Multiple Controllers 00:17:12.349 Size (in LBAs): 1310720 (5GiB) 00:17:12.349 Capacity (in LBAs): 1310720 (5GiB) 00:17:12.349 Utilization (in LBAs): 1310720 (5GiB) 00:17:12.349 UUID: ba19858c-b000-4837-a860-4854581b978d 00:17:12.349 Thin Provisioning: Not Supported 00:17:12.349 Per-NS Atomic Units: Yes 00:17:12.349 Atomic Boundary Size (Normal): 0 00:17:12.349 Atomic Boundary Size (PFail): 0 00:17:12.349 Atomic Boundary Offset: 0 00:17:12.349 NGUID/EUI64 Never Reused: No 00:17:12.349 ANA group ID: 1 00:17:12.349 Namespace Write Protected: No 00:17:12.349 Number of LBA Formats: 1 00:17:12.349 Current LBA Format: LBA Format #00 00:17:12.349 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:12.349 00:17:12.349 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:12.349 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:12.349 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:17:12.349 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:12.349 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:17:12.349 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:12.349 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:12.349 rmmod nvme_tcp 00:17:12.349 rmmod nvme_fabrics 00:17:12.349 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:12.349 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:17:12.349 14:00:01 
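All of the identify data above was gathered from the host side with spdk_nvme_identify speaking NVMe/TCP directly; the kernel initiator is never attached to this target during the test. Purely for reference, the same subsystem could be reached with nvme-cli using the parameters already present in the discovery log and hostnqn above; this is a hypothetical invocation, not something executed in this run:

  nvme connect -t tcp -a 10.0.0.1 -s 4420 \
      -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89
  nvme disconnect -n nqn.2016-06.io.spdk:testnqn   # detach again once finished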
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:17:12.349 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:12.349 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:12.349 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:12.349 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:12.349 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:12.349 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:12.349 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.349 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.349 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.607 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:12.607 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:12.607 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:12.607 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:17:12.607 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:12.607 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:12.607 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:12.607 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:12.607 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:12.607 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:12.607 14:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:13.169 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:13.426 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:13.426 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:13.426 ************************************ 00:17:13.426 END TEST nvmf_identify_kernel_target 00:17:13.426 ************************************ 00:17:13.426 00:17:13.426 real 0m2.950s 00:17:13.426 user 0m1.003s 00:17:13.426 sys 0m1.405s 00:17:13.426 14:00:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:13.426 14:00:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.426 14:00:02 nvmf_tcp.nvmf_host -- 
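clean_kernel_target, traced above, dismantles the kernel target in roughly the reverse order it was created: disable and remove the namespace, unlink the subsystem from the port, remove the configfs directories, then unload the nvmet modules and rebind the NVMe devices to userspace drivers via setup.sh. A condensed recap; as before, the destination of the 'echo 0' is the standard namespace enable attribute and is inferred rather than visible in the xtrace:

  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  echo 0 > "$sub/namespaces/1/enable"                                   # inferred target of the 'echo 0'
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir "$sub/namespaces/1" /sys/kernel/config/nvmet/ports/1 "$sub"
  modprobe -r nvmet_tcp nvmet                                           # unload the kernel target modules
  /home/vagrant/spdk_repo/spdk/scripts/setup.sh                         # rebind devices, as in the trace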
nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:13.426 14:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:13.426 14:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:13.426 14:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.426 ************************************ 00:17:13.426 START TEST nvmf_auth_host 00:17:13.426 ************************************ 00:17:13.426 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:13.684 * Looking for test storage... 00:17:13.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:13.684 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:13.684 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:13.684 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.684 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.684 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.684 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.684 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.684 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.684 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:13.685 14:00:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:13.685 Cannot find device "nvmf_tgt_br" 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:13.685 Cannot find device "nvmf_tgt_br2" 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:13.685 Cannot find device "nvmf_tgt_br" 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:13.685 Cannot find device "nvmf_tgt_br2" 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:13.685 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:13.685 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:13.685 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:13.686 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:13.686 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:13.686 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:13.686 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:13.944 14:00:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:13.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:13.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:17:13.944 00:17:13.944 --- 10.0.0.2 ping statistics --- 00:17:13.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.944 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:13.944 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:13.944 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:17:13.944 00:17:13.944 --- 10.0.0.3 ping statistics --- 00:17:13.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.944 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:13.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:13.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:17:13.944 00:17:13.944 --- 10.0.0.1 ping statistics --- 00:17:13.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.944 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=77910 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 77910 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 77910 ']' 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
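The veth/bridge topology that nvmf_veth_init builds in the trace above can be reproduced by hand for debugging; the following is a minimal sketch (not the harness itself), reusing the namespace, interface names, 10.0.0.x addresses and port 4420 shown in the log, run as root with error handling and the second target pair (nvmf_tgt_if2/nvmf_tgt_br2) omitted:

    # Recreate the test network: the initiator side stays in the default netns,
    # the target interface moves into nvmf_tgt_ns_spdk, and both ends are
    # joined by the nvmf_br bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Allow NVMe/TCP traffic in and across the bridge, then sanity-check reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2

With this in place, an nvmf_tgt started inside nvmf_tgt_ns_spdk (as the log does next via ip netns exec) is reachable from the host at 10.0.0.2:4420.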
00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:13.944 14:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.331 14:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:15.331 14:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:17:15.331 14:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:15.331 14:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:15.331 14:00:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=949a805ed007229bed7c3a8879246073 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.G6b 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 949a805ed007229bed7c3a8879246073 0 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 949a805ed007229bed7c3a8879246073 0 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=949a805ed007229bed7c3a8879246073 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.G6b 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.G6b 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.G6b 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:15.331 14:00:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ce1771b0640fbdc9c6bd51c8d1023aa05f65c12e639ebee6ef51d5db9f53858a 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.4bB 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ce1771b0640fbdc9c6bd51c8d1023aa05f65c12e639ebee6ef51d5db9f53858a 3 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ce1771b0640fbdc9c6bd51c8d1023aa05f65c12e639ebee6ef51d5db9f53858a 3 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ce1771b0640fbdc9c6bd51c8d1023aa05f65c12e639ebee6ef51d5db9f53858a 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.4bB 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.4bB 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.4bB 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0a03952a04442d3219d98273734dc82a8445d3b1cad929ca 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.PXr 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0a03952a04442d3219d98273734dc82a8445d3b1cad929ca 0 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0a03952a04442d3219d98273734dc82a8445d3b1cad929ca 0 
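The gen_dhchap_key calls traced above reduce to: draw N random bytes, hex-encode them with xxd, wrap them into a DHHC-1 secret, and park the result in a private temp file that is later fed to keyring_file_add_key by path. A rough stand-alone sketch of the same idea follows; KEY_BYTES and the echoed message are placeholders, and the exact DHHC-1 wrapping (done by the inline python helper in nvmf/common.sh) is not reimplemented here:

    # Raw key material, exactly as in the trace: N random bytes,
    # plain hex (-p), no line wrapping (-c0).
    KEY_BYTES=16                                  # 16 bytes -> 32 hex chars (the null/sha256 case above)
    key=$(xxd -p -c0 -l "$KEY_BYTES" /dev/urandom)

    # Private temp file; the RPC later loads the key from this path.
    keyfile=$(mktemp -t spdk.key-null.XXX)
    chmod 0600 "$keyfile"
    echo "$key" > "$keyfile"                      # the real script stores the DHHC-1:<digest>:...: form here
    echo "generated key file: $keyfile"

The longer keys in the log (sha384/sha512 variants) only differ in the byte count pulled from /dev/urandom and in the digest tag used when formatting.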
00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0a03952a04442d3219d98273734dc82a8445d3b1cad929ca 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.PXr 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.PXr 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.PXr 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9a2411a187c9ac7f5d5d7128aa7fa393d6b5a7ac8714d8f9 00:17:15.331 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.92k 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9a2411a187c9ac7f5d5d7128aa7fa393d6b5a7ac8714d8f9 2 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9a2411a187c9ac7f5d5d7128aa7fa393d6b5a7ac8714d8f9 2 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9a2411a187c9ac7f5d5d7128aa7fa393d6b5a7ac8714d8f9 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.92k 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.92k 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.92k 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.332 14:00:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6edce11b181fcc0cea5d489e5613e7f9 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.qUQ 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6edce11b181fcc0cea5d489e5613e7f9 1 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6edce11b181fcc0cea5d489e5613e7f9 1 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6edce11b181fcc0cea5d489e5613e7f9 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.qUQ 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.qUQ 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.qUQ 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=586e884e1b3c8888cb21753b7d8338e1 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.PPq 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 586e884e1b3c8888cb21753b7d8338e1 1 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 586e884e1b3c8888cb21753b7d8338e1 1 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=586e884e1b3c8888cb21753b7d8338e1 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:15.332 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:15.590 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.PPq 00:17:15.590 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.PPq 00:17:15.590 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.PPq 00:17:15.590 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:15.590 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:15.590 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.590 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:15.590 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:15.590 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:15.590 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:15.590 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=07832cb48ba32516a8d4cd7c261a65ece66d0e91efe0a5fa 00:17:15.590 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:15.590 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.qhg 00:17:15.590 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 07832cb48ba32516a8d4cd7c261a65ece66d0e91efe0a5fa 2 00:17:15.590 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 07832cb48ba32516a8d4cd7c261a65ece66d0e91efe0a5fa 2 00:17:15.590 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:15.590 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:15.590 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=07832cb48ba32516a8d4cd7c261a65ece66d0e91efe0a5fa 00:17:15.590 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:15.590 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.qhg 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.qhg 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.qhg 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:15.591 14:00:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6e84ca9d0724940091a6b5e63d3c9693 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.saV 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6e84ca9d0724940091a6b5e63d3c9693 0 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6e84ca9d0724940091a6b5e63d3c9693 0 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6e84ca9d0724940091a6b5e63d3c9693 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.saV 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.saV 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.saV 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=51c7297f34c0aae322591d94834a4241656a1453ab59f2dd0a65f0e7ba898e09 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Pp3 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 51c7297f34c0aae322591d94834a4241656a1453ab59f2dd0a65f0e7ba898e09 3 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 51c7297f34c0aae322591d94834a4241656a1453ab59f2dd0a65f0e7ba898e09 3 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=51c7297f34c0aae322591d94834a4241656a1453ab59f2dd0a65f0e7ba898e09 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Pp3 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Pp3 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Pp3 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 77910 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 77910 ']' 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.591 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:15.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.850 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.850 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:15.850 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.G6b 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.4bB ]] 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4bB 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.PXr 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.92k ]] 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.92k 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.qUQ 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.PPq ]] 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.PPq 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.qhg 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.saV ]] 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.saV 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Pp3 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:16.109 14:00:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:16.109 14:00:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:16.109 14:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:16.109 14:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:16.368 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:16.368 Waiting for block devices as requested 00:17:16.368 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:16.626 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:17.193 14:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:17.193 14:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:17.193 14:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:17.193 14:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:17.193 14:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:17.193 14:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:17.193 14:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:17.193 14:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:17.193 14:00:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:17.193 No valid GPT data, bailing 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:17.193 No valid GPT data, bailing 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@392 -- # return 1 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:17.193 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:17.194 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:17.194 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:17.194 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:17.194 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:17.194 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:17.194 No valid GPT data, bailing 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:17.452 No valid GPT data, bailing 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:17:17.452 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:17:17.453 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:17:17.453 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:17.453 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid=71427938-e211-49fa-b6ad-486cdab0bd89 -a 10.0.0.1 -t tcp -s 4420 00:17:17.453 00:17:17.453 Discovery Log Number of Records 2, Generation counter 2 00:17:17.453 =====Discovery Log Entry 0====== 00:17:17.453 trtype: tcp 00:17:17.453 adrfam: ipv4 00:17:17.453 subtype: current discovery subsystem 00:17:17.453 treq: not specified, sq flow control disable supported 00:17:17.453 portid: 1 00:17:17.453 trsvcid: 4420 00:17:17.453 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:17.453 traddr: 10.0.0.1 00:17:17.453 eflags: none 00:17:17.453 sectype: none 00:17:17.453 =====Discovery Log Entry 1====== 00:17:17.453 trtype: tcp 00:17:17.453 adrfam: ipv4 00:17:17.453 subtype: nvme subsystem 00:17:17.453 treq: not specified, sq flow control disable supported 00:17:17.453 portid: 1 00:17:17.453 trsvcid: 4420 00:17:17.453 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:17.453 traddr: 10.0.0.1 00:17:17.453 eflags: none 00:17:17.453 sectype: none 00:17:17.453 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:17.453 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:17.453 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:17.453 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:17.453 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.453 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:17.453 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:17.453 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:17.453 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:17.453 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:17.453 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:17.453 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: ]] 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
10.0.0.1 ]] 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.711 nvme0n1 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: ]] 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
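The trace up to this point walks /sys/block/nvme*, skips zoned namespaces, lets spdk-gpt.py and blkid confirm each candidate carries no partition table (the "No valid GPT data, bailing" lines), and settles on /dev/nvme1n1 as the backing block device. nvmf/common.sh@658-@677 then builds a kernel nvmet target over configfs, `nvme discover` verifies the two expected discovery records (the discovery subsystem plus nqn.2024-02.io.spdk:cnode0), and host/auth.sh@36-@38 registers nqn.2024-02.io.spdk:host0 under allowed_hosts (the echo 0 at @37 most likely switching attr_allow_any_host back off, though xtrace hides the target). Because `set -x` never prints redirections, only the mkdir/echo/ln halves of those configfs writes are visible; the sketch below reconstructs the bring-up with attribute names that are assumptions based on the standard kernel nvmet configfs layout, not something shown in this log.

  # Hedged reconstruction of nvmf/common.sh@658-@677; redirect targets are assumed.
  cfs=/sys/kernel/config/nvmet
  subsys=$cfs/subsystems/nqn.2024-02.io.spdk:cnode0
  port=$cfs/ports/1

  mkdir "$subsys"                                               # @658
  mkdir "$subsys/namespaces/1"                                  # @659
  mkdir "$port"                                                  # @660
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # @665 (assumed; attr_serial is the other candidate)
  echo 1            > "$subsys/attr_allow_any_host"             # @667 (assumed target)
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"        # @668
  echo 1            > "$subsys/namespaces/1/enable"             # @669
  echo 10.0.0.1     > "$port/addr_traddr"                       # @671
  echo tcp          > "$port/addr_trtype"                       # @672
  echo 4420         > "$port/addr_trsvcid"                      # @673
  echo ipv4         > "$port/addr_adrfam"                       # @674
  ln -s "$subsys" "$port/subsystems/"                           # @677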
host/auth.sh@51 -- # echo DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.711 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.969 nvme0n1 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.969 
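From the host/auth.sh@55-@65 markers, connect_authenticate has a simple shape: push the digest/dhgroup policy into bdev_nvme, attach to the kernel target with the key pair for the requested keyid, check that the controller actually appeared, and detach so the next combination starts from a clean slate. A rough paraphrase follows; rpc_cmd is taken to be the suite's JSON-RPC wrapper and keyN/ckeyN are the keyring names visible on the attach command lines above.

  # Paraphrase of connect_authenticate as implied by the @55-@65 trace markers.
  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3 ckey                       # @55-@57
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})      # @58

      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"

      # authentication succeeded only if the controller shows up by name
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }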
14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: ]] 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.969 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.970 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.970 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:17.970 14:00:06 
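On the target side, nvmet_auth_set_key (host/auth.sh@42-@51) reprograms what the kernel nvmet host entry will expect for the next round: the HMAC digest, the FFDHE group, the host key and, when one is configured, the controller key for bidirectional authentication. As with the port setup, xtrace hides the redirections, so the dhchap_* attribute names in this sketch are assumptions based on the kernel's nvmet host entry; only the echoed values come from the trace.

  # Hedged sketch of nvmet_auth_set_key; the dhchap_* attribute names are assumed.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3                     # @42-@44
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

      echo "hmac($digest)"       > "$host/dhchap_hash"        # @48
      echo "$dhgroup"            > "$host/dhchap_dhgroup"     # @49
      echo "${keys[keyid]}"      > "$host/dhchap_key"         # @50
      if [[ -n ${ckeys[keyid]} ]]; then                       # @51 tests the inverse, -z
          echo "${ckeys[keyid]}" > "$host/dhchap_ctrlr_key"
      fi
  }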
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:17.970 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:17.970 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.970 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.970 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:17.970 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.970 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:17.970 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:17.970 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:17.970 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.970 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.970 14:00:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.229 nvme0n1 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:18.229 14:00:07 
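The secrets themselves use the NVMe-oF representation DHHC-1:<t>:<base64>:, where <t> names the transformation hash applied to the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the secret followed by a 4-byte CRC-32; keyid 2 above, for example, uses 01-type secrets in both directions. A hypothetical sanity check on that host key, assuming any shell with cut and base64 available:

  # 48 base64 characters decode to 36 bytes: a 32-byte secret plus its CRC-32 trailer.
  key='DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF:'
  printf '%s' "$key" | cut -d: -f3 | base64 -d | wc -c    # prints 36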
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: ]] 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.229 nvme0n1 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.229 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: ]] 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.489 14:00:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.489 nvme0n1 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:18.489 
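The recurring nvmf/common.sh@741-@755 block is get_main_ns_ip resolving which address the initiator should dial for the active transport: the associative array stores environment-variable names, and the indirect expansion at the end yields 10.0.0.1 for TCP in this run. A sketch of that helper follows; the TEST_TRANSPORT variable and the exact early-return layout are assumptions, only the candidate map and the echoed result are taken from the trace.

  # Approximate shape of get_main_ns_ip per the @741-@755 markers.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1                   # "tcp" in this job
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}                   # holds the variable *name*
      [[ -z ${!ip} ]] && return 1                            # indirect expansion: 10.0.0.1
      echo "${!ip}"
  }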
14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.489 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
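Keyid 4 is the only slot without a controller key (ckey is empty at @46 and @51 short-circuits on [[ -z '' ]]), so the attach just above passes --dhchap-key key4 and nothing else: that round exercises host-to-controller authentication only. The ${ckeys[keyid]:+...} expansion at @58 is what drops the extra argument; a minimal illustration with a stand-in array:

  # Illustration only: ckeys is a stand-in for the suite's real array.
  declare -a ckeys=()
  ckeys[2]='DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh:'
  ckeys[4]=''                              # keyid 4: no controller key configured

  for keyid in 2 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${#ckey[@]} extra argument(s)"   # 2, then 0
  done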
00:17:18.748 nvme0n1 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:18.748 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: ]] 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:19.006 14:00:07 
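With keyid 4 done, the @101 marker above shows the dhgroup loop advancing from ffdhe2048 to ffdhe3072 while the keyid sweep restarts at 0. Read together, the @100-@104 markers describe one nested sweep over every digest, DH group and key slot, roughly:

  # Overall sweep implied by the host/auth.sh@100-@104 markers.
  for digest in "${digests[@]}"; do            # sha256, sha384, sha512
      for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 through ffdhe8192
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done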
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.006 14:00:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.264 nvme0n1 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.264 14:00:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: ]] 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.264 14:00:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.264 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.525 nvme0n1 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: ]] 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.525 nvme0n1 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.525 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.784 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.784 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:19.784 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.784 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.784 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.784 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.784 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:19.784 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.784 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:19.784 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: ]] 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.785 nvme0n1 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.785 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:20.044 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:20.045 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:20.045 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.045 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.045 nvme0n1 00:17:20.045 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.045 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.045 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.045 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.045 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.045 14:00:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.045 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.045 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.045 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.045 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.045 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.045 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.045 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.045 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:20.045 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.045 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:20.045 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:20.045 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:20.045 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:20.045 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:20.045 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:20.045 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: ]] 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.979 14:00:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.979 nvme0n1 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: ]] 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.979 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.980 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:20.980 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.980 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:20.980 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:20.980 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:20.980 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.980 14:00:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.980 14:00:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.243 nvme0n1 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: ]] 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.243 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.501 nvme0n1 00:17:21.501 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: ]] 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.502 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.760 nvme0n1 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:21.760 14:00:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.760 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.018 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:22.018 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.018 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:22.018 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:22.018 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:22.018 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:22.018 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.018 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.018 nvme0n1 00:17:22.018 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.018 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.018 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.018 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.018 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.018 14:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.018 14:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.018 14:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.018 14:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.018 14:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.277 14:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.277 14:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.277 14:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.277 14:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:22.277 14:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.277 14:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:22.277 14:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:22.277 14:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:22.277 14:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:22.277 14:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:22.277 14:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:22.277 14:00:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: ]] 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.175 14:00:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.433 nvme0n1 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: ]] 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.433 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.691 nvme0n1 00:17:24.691 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.691 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.691 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.691 14:00:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.691 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.691 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: ]] 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.947 14:00:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.947 14:00:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.205 nvme0n1 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: ]] 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:25.205 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.205 
14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.771 nvme0n1 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.771 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.772 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.772 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.772 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:25.772 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.772 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.029 nvme0n1 00:17:26.029 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.029 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.029 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.029 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.029 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.029 14:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.030 14:00:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: ]] 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.030 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.287 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.287 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.287 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.287 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.287 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.287 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.287 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.287 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.287 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.287 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.287 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.287 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.287 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.287 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.287 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.853 nvme0n1 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: ]] 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.853 14:00:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.809 nvme0n1 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: ]] 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.809 
14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.809 14:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.374 nvme0n1 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: ]] 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.375 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.940 nvme0n1 00:17:28.940 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.940 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.940 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.940 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.940 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.940 14:00:17 
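Before each connect attempt the trace calls nvmet_auth_set_key (host/auth.sh@42-51), which echoes the digest, DH group and DHHC-1 secrets for the selected key index. A rough sketch of what that helper plausibly does, assuming the target side is the kernel nvmet and that the echoed values land in its per-host configfs auth attributes; the path and attribute names below are assumptions, not taken from this log, and keys[]/ckeys[] are the secret arrays the surrounding loop indexes into:

# Hedged sketch of nvmet_auth_set_key as suggested by the host/auth.sh@42-51 trace.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path

    echo "hmac($digest)"  > "$host/dhchap_hash"     # e.g. 'hmac(sha256)' as echoed above
    echo "$dhgroup"       > "$host/dhchap_dhgroup"  # e.g. ffdhe8192
    echo "${keys[keyid]}" > "$host/dhchap_key"      # DHHC-1:0x:... host secret
    # A controller key is only set when one exists for this index (cf. the [[ -z ... ]] check).
    [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
}
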
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.196 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.196 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.196 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.196 14:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.196 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.196 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.196 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:29.196 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.196 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:29.196 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:29.196 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:29.196 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:29.196 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:29.197 14:00:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.197 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.762 nvme0n1 00:17:29.762 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.762 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.762 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.762 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.762 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.762 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.762 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.762 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.762 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.762 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.762 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.762 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:29.762 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: ]] 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.763 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:30.020 nvme0n1 00:17:30.020 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.020 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.020 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.020 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.020 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.020 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.020 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.020 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.020 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.020 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.020 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.020 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.020 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:30.020 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.020 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:30.020 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:30.020 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:30.020 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:30.020 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: ]] 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.021 14:00:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.279 nvme0n1 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:30.279 
14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: ]] 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:30.279 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.280 nvme0n1 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: ]] 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.280 
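The host/auth.sh@100-102 lines above show the overall iteration: an outer loop over digests, a middle loop over DH groups and an inner loop over key indices, with nvmet_auth_set_key followed by connect_authenticate on every combination. A skeleton of that structure, listing only the digest and dhgroup values that actually appear in this excerpt (the full test covers more):

# Iteration skeleton inferred from the for-loops traced at host/auth.sh@100-102.
digests=(sha256 sha384)
dhgroups=(ffdhe8192 ffdhe2048 ffdhe3072)

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do                        # keys[0..4] hold the DHHC-1 secrets
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target's auth settings
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach via SPDK, verify, detach
    done
  done
done
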
14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.280 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.539 nvme0n1 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.539 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.797 nvme0n1 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: ]] 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.797 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.798 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.798 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.798 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.798 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.798 nvme0n1 00:17:30.798 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.798 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.798 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.798 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.798 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.798 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.056 
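Each connect_authenticate call resolves the target address through get_main_ns_ip (nvmf/common.sh@741-755): it maps the active transport to the environment variable holding the address and prints its value, 10.0.0.1 for TCP in this run. A reconstruction from the trace, with the transport variable name ($TEST_TRANSPORT) assumed rather than shown, since the log only contains the expanded values:

# Reconstruction of get_main_ns_ip from the nvmf/common.sh@741-755 trace lines.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # No transport, or no address variable mapped for it -> nothing to print.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP for tcp
    [[ -z ${!ip} ]] && return 1            # indirect expansion gives the actual address
    echo "${!ip}"                          # -> 10.0.0.1 in this run
}
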
14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: ]] 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:31.056 14:00:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.056 nvme0n1 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.056 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.057 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.057 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.057 14:00:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:31.057 14:00:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: ]] 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.057 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.315 nvme0n1 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: ]] 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.315 14:00:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.315 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.316 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:31.316 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.316 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:31.316 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:31.316 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:31.316 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:31.316 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.316 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.573 nvme0n1 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:31.574 
14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:17:31.574 nvme0n1 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.574 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: ]] 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:31.832 14:00:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.832 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:31.833 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.833 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:31.833 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:31.833 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:31.833 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.833 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.833 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.833 nvme0n1 00:17:31.833 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.833 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.833 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.833 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.833 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.090 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.090 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.090 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.090 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.090 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.090 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.090 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.090 14:00:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:32.090 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.090 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:32.090 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:32.090 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:32.090 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:32.090 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:32.090 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:32.090 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: ]] 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.091 14:00:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.091 14:00:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.349 nvme0n1 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: ]] 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.349 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.606 nvme0n1 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: ]] 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.606 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.864 nvme0n1 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.864 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.122 nvme0n1 00:17:33.122 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.122 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.122 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.122 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.122 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.122 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.122 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.122 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.122 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.122 14:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: ]] 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.122 14:00:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.122 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.379 nvme0n1 00:17:33.379 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.379 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.379 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.379 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.379 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.379 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: ]] 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.637 14:00:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.637 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.896 nvme0n1 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: ]] 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.896 14:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.462 nvme0n1 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: ]] 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.462 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.719 nvme0n1 00:17:34.720 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.720 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.720 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.720 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.720 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.720 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:34.977 14:00:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.977 14:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.293 nvme0n1 00:17:35.293 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.293 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.293 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.293 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.293 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.293 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: ]] 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.294 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.858 nvme0n1 00:17:35.858 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.859 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.859 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.859 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.859 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: ]] 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.117 14:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.683 nvme0n1 00:17:36.683 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.683 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.683 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.683 14:00:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.683 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.683 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.683 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.683 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.683 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.683 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.683 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.683 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.683 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:36.683 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.683 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:36.683 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:36.683 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:36.683 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:36.683 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:36.683 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: ]] 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.684 14:00:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.684 14:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.617 nvme0n1 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: ]] 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:37.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.617 
14:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.203 nvme0n1 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.203 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.768 nvme0n1 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:38.768 14:00:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: ]] 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.768 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.026 14:00:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.026 nvme0n1 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: ]] 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:39.026 14:00:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.026 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.027 14:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.285 nvme0n1 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: ]] 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:39.285 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.286 nvme0n1 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.286 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.543 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.543 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.543 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.543 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.543 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.543 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.543 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:39.543 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.543 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:39.543 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:39.543 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: ]] 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.544 nvme0n1 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.544 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.804 nvme0n1 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:39.804 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: ]] 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.805 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:40.063 nvme0n1 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: ]] 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.063 14:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.321 nvme0n1 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:40.321 
14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: ]] 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.321 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.322 nvme0n1 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.322 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.580 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.580 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.580 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.580 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.580 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: ]] 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.581 
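The connect_authenticate sha512 ffdhe3072 3 call that begins here runs the same host-side sequence this log repeats for every key: restrict the allowed digest and DH group, attach a controller using the named DH-HMAC-CHAP secrets, check that it shows up, then detach. A minimal stand-alone sketch of that sequence, assuming SPDK's scripts/rpc.py (abbreviated rpc.py below) is on PATH and that key3/ckey3 were registered earlier in the run; rpc_cmd in the trace is the test suite's wrapper around the same RPCs:

  # one host-side authentication round, parameters as in the surrounding trace
  digest=sha512 dhgroup=ffdhe3072 keyid=3
  rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
  [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # controller attached, auth succeeded
  rpc.py bdev_nvme_detach_controller nvme0
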
14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.581 nvme0n1 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.581 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.840 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.841 nvme0n1 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: ]] 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.841 14:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.100 nvme0n1 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.100 
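Every secret echoed in this log uses the NVMe DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where <t> records which hash transformed the secret (00 for an untransformed secret and 01, 02, 03 for SHA-256, SHA-384 and SHA-512 respectively). Key index 4 carries no controller secret (its ckey= is empty in the entries above), and the expansion at host/auth.sh@58 silently drops the bidirectional flag in that case. The same bash idiom in isolation, with array contents elided and purely illustrative:

  declare -a ckeys=([0]="DHHC-1:03:..." [4]="")             # keyid 4 has no controller secret
  keyid=4
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) # expands to nothing when the entry is empty
  echo "extra attach args: ${ckey[*]:-<none>}"              # prints <none> for keyid 4
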
14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: ]] 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.100 14:00:30 
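The expansion resuming just below (ip_candidates=() and the checks that follow) is get_main_ns_ip from nvmf/common.sh, which picks the address the host dials based on the transport under test. A condensed reconstruction from the expansions visible in this trace; the in-tree helper is more involved than this sketch:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # here: NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1            # indirect expansion; 10.0.0.1 in this run
      echo "${!ip}"
  }
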
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.100 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.358 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.358 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.358 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.358 nvme0n1 00:17:41.358 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.358 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.358 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.358 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.358 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.358 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.358 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.358 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.358 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.358 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:41.623 14:00:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: ]] 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.623 nvme0n1 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.623 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: ]] 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.881 14:00:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.881 nvme0n1 00:17:41.881 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:42.139 
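The nvmet_auth_set_key sha512 ffdhe4096 4 expansion continuing below programs the target side: the bare echo 'hmac(sha512)', echo ffdhe4096 and echo DHHC-1:... commands are redirected into the kernel nvmet configfs entry for the host NQN, and xtrace simply does not print redirections. A hedged sketch of that half; the configfs path and attribute names are assumptions based on the usual Linux nvmet layout, not something shown in this log:

  nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
  echo 'hmac(sha512)' > "$nvmet_host/dhchap_hash"
  echo ffdhe4096      > "$nvmet_host/dhchap_dhgroup"
  echo "$key"         > "$nvmet_host/dhchap_key"
  [[ -n $ckey ]] && echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"         # only for bidirectional keys
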
14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.139 14:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.139 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.139 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:42.139 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:42.139 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:42.139 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.139 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.139 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:42.139 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.139 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:42.139 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:42.139 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:42.139 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:42.139 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.139 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
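[Editor's note] The records above are one pass of the sha512 loop: a DHHC-1 key (and, for most keyids, a controller key) is installed on the kernel nvmet target via nvmet_auth_set_key, the host is restricted to a single digest/dhgroup pair with bdev_nvme_set_options, and the controller is attached with the matching --dhchap-key/--dhchap-ctrlr-key before being verified and detached. A minimal sketch of that host-side sequence, assuming rpc_cmd is the usual SPDK test wrapper around scripts/rpc.py and using the hypothetical helper name connect_and_verify, reads:

    # Sketch only -- not part of the captured trace. Flags and arguments are
    # taken verbatim from the xtrace records above.
    connect_and_verify() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Limit the host to the digest/dhgroup pair under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Attach with the DH-HMAC-CHAP key for this keyid; the controller key
        # is passed only when one is configured (keyid 4 in the trace has none).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
        # Successful mutual authentication leaves exactly one controller, nvme0.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The same sequence then repeats for the ffdhe6144 and ffdhe8192 groups in the records that follow, with get_main_ns_ip picking 10.0.0.1 (NVMF_INITIATOR_IP) as the TCP target address each time.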
00:17:42.398 nvme0n1 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: ]] 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:42.398 14:00:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:42.398 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.399 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.399 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.399 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.399 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:42.399 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:42.399 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:42.399 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.399 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.399 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:42.399 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.399 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:42.399 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:42.399 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:42.399 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.399 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.399 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.657 nvme0n1 00:17:42.657 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.657 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.657 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.657 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.657 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.915 14:00:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: ]] 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:42.915 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:42.916 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:42.916 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.916 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.916 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:42.916 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.916 14:00:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:42.916 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:42.916 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:42.916 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.916 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.916 14:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.174 nvme0n1 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: ]] 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.174 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.740 nvme0n1 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: ]] 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.740 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.997 nvme0n1 00:17:43.997 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.997 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.997 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.997 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.997 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.997 14:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.997 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.998 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.998 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.998 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.255 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.512 nvme0n1 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ5YTgwNWVkMDA3MjI5YmVkN2MzYTg4NzkyNDYwNzMDNBo3: 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: ]] 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2UxNzcxYjA2NDBmYmRjOWM2YmQ1MWM4ZDEwMjNhYTA1ZjY1YzEyZTYzOWViZWU2ZWY1MWQ1ZGI5ZjUzODU4YZB/y1k=: 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.512 14:00:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:44.512 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.513 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.513 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:44.513 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.513 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:44.513 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:44.513 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:44.513 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.513 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.513 14:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.446 nvme0n1 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: ]] 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.446 14:00:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.446 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.010 nvme0n1 00:17:46.010 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.010 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.010 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.010 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.010 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.010 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.010 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.010 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.010 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.010 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.010 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.010 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.011 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:46.011 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.011 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:46.011 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:46.011 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:46.011 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:46.011 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:46.011 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:46.011 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:46.011 14:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVkY2UxMWIxODFmY2MwY2VhNWQ0ODllNTYxM2U3ZjleIPNF: 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: ]] 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg2ZTg4NGUxYjNjODg4OGNiMjE3NTNiN2Q4MzM4ZTFKzMIh: 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.011 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.945 nvme0n1 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc4MzJjYjQ4YmEzMjUxNmE4ZDRjZDdjMjYxYTY1ZWNlNjZkMGU5MWVmZTBhNWZhxP5fqA==: 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: ]] 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU4NGNhOWQwNzI0OTQwMDkxYTZiNWU2M2QzYzk2OTNk7UQn: 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.945 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.946 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:46.946 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.946 14:00:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.512 nvme0n1 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTFjNzI5N2YzNGMwYWFlMzIyNTkxZDk0ODM0YTQyNDE2NTZhMTQ1M2FiNTlmMmRkMGE2NWYwZTdiYTg5OGUwOZayqhU=: 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:47.512 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.513 14:00:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.513 14:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.078 nvme0n1 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGEwMzk1MmEwNDQ0MmQzMjE5ZDk4MjczNzM0ZGM4MmE4NDQ1ZDNiMWNhZDkyOWNhFXWrsA==: 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: ]] 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWEyNDExYTE4N2M5YWM3ZjVkNWQ3MTI4YWE3ZmEzOTNkNmI1YTdhYzg3MTRkOGY58PDx7w==: 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:48.078 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.079 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.336 request: 00:17:48.336 { 00:17:48.336 "name": "nvme0", 00:17:48.336 "trtype": "tcp", 00:17:48.336 "traddr": "10.0.0.1", 00:17:48.336 "adrfam": "ipv4", 00:17:48.336 "trsvcid": "4420", 00:17:48.336 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:48.336 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:48.336 "prchk_reftag": false, 00:17:48.336 "prchk_guard": false, 00:17:48.336 "hdgst": false, 00:17:48.336 "ddgst": false, 00:17:48.336 "method": "bdev_nvme_attach_controller", 00:17:48.336 "req_id": 1 00:17:48.336 } 00:17:48.336 Got JSON-RPC error response 00:17:48.336 response: 00:17:48.336 { 00:17:48.336 "code": -5, 00:17:48.336 "message": "Input/output error" 00:17:48.336 } 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.336 14:00:37 
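The traces above exercise both directions of DH-HMAC-CHAP over the fabric: for each generated key the host points bdev_nvme at the matching digest and DH group, attaches with --dhchap-key (plus --dhchap-ctrlr-key when bidirectional authentication is configured), confirms the controller shows up, and detaches again; the follow-up checks then expect any attach without a key, or with the wrong key, to be rejected with JSON-RPC error -5 (Input/output error) and to leave no controller behind. A condensed sketch of that success-then-failure pattern, reusing the addresses and NQNs from this run (rpc.py path shortened; key3/ckey3 stand for keys the test has already loaded):

    rpc=scripts/rpc.py

    # Positive path: digest/dhgroup must match what was written to the kernel host entry.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $rpc bdev_nvme_detach_controller nvme0

    # Negative path: with the target still demanding authentication, a key-less attach
    # must fail and must not create a controller as a side effect.
    if $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "unauthenticated attach unexpectedly succeeded" >&2
        exit 1
    fi
    (( $($rpc bdev_nvme_get_controllers | jq length) == 0 ))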
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.336 request: 00:17:48.336 { 00:17:48.336 "name": "nvme0", 00:17:48.336 "trtype": "tcp", 00:17:48.336 "traddr": "10.0.0.1", 00:17:48.336 "adrfam": "ipv4", 00:17:48.336 "trsvcid": "4420", 00:17:48.336 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:48.336 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:48.336 "prchk_reftag": false, 00:17:48.336 "prchk_guard": false, 00:17:48.336 "hdgst": false, 00:17:48.336 "ddgst": false, 00:17:48.336 "dhchap_key": "key2", 00:17:48.336 "method": "bdev_nvme_attach_controller", 00:17:48.336 "req_id": 1 00:17:48.336 } 00:17:48.336 Got JSON-RPC error response 00:17:48.336 response: 00:17:48.336 { 00:17:48.336 "code": -5, 00:17:48.336 "message": "Input/output error" 00:17:48.336 } 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.336 14:00:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.336 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.337 request: 00:17:48.337 { 00:17:48.337 "name": "nvme0", 00:17:48.337 "trtype": "tcp", 00:17:48.337 "traddr": "10.0.0.1", 00:17:48.337 "adrfam": "ipv4", 00:17:48.337 "trsvcid": "4420", 00:17:48.337 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:48.337 "hostnqn": "nqn.2024-02.io.spdk:host0", 
00:17:48.337 "prchk_reftag": false, 00:17:48.337 "prchk_guard": false, 00:17:48.337 "hdgst": false, 00:17:48.337 "ddgst": false, 00:17:48.337 "dhchap_key": "key1", 00:17:48.337 "dhchap_ctrlr_key": "ckey2", 00:17:48.337 "method": "bdev_nvme_attach_controller", 00:17:48.337 "req_id": 1 00:17:48.337 } 00:17:48.337 Got JSON-RPC error response 00:17:48.337 response: 00:17:48.337 { 00:17:48.337 "code": -5, 00:17:48.337 "message": "Input/output error" 00:17:48.337 } 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:48.337 rmmod nvme_tcp 00:17:48.337 rmmod nvme_fabrics 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 77910 ']' 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 77910 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 77910 ']' 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 77910 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:48.337 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77910 00:17:48.595 killing process with pid 77910 00:17:48.595 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:48.595 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:48.595 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77910' 00:17:48.595 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 77910 00:17:48.595 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@974 -- # wait 77910 00:17:48.595 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:48.595 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:48.595 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:48.595 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:48.595 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:48.595 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.595 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:48.595 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.595 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:48.852 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:48.852 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:48.852 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:48.852 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:48.852 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:17:48.852 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:48.852 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:48.853 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:48.853 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:48.853 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:48.853 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:48.853 14:00:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:49.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:49.419 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:49.677 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:49.677 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.G6b /tmp/spdk.key-null.PXr /tmp/spdk.key-sha256.qUQ /tmp/spdk.key-sha384.qhg /tmp/spdk.key-sha512.Pp3 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:49.677 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:49.947 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:49.947 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:49.947 0000:00:10.0 
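The cleanup above unwinds the kernel nvmet target that the auth test built through configfs: the host grant is removed from the subsystem's allowed_hosts, the host entry itself is deleted, and the namespace, port and subsystem directories are removed in reverse order of creation before nvmet_tcp/nvmet are unloaded. A condensed sketch of that teardown, keeping the NQNs, port number and namespace ID visible in the trace (the namespaces/1/enable target of the "echo 0" is inferred, since xtrace hides redirections; adjust IDs if your layout differs):

    nqn=nqn.2024-02.io.spdk:cnode0
    hostnqn=nqn.2024-02.io.spdk:host0
    cfs=/sys/kernel/config/nvmet

    rm    "$cfs/subsystems/$nqn/allowed_hosts/$hostnqn"    # revoke the host grant
    rmdir "$cfs/hosts/$hostnqn"                            # drop the host entry and its DH-HMAC-CHAP keys
    echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"    # disable the namespace first
    rm -f "$cfs/ports/1/subsystems/$nqn"                   # unlink the subsystem from the port
    rmdir "$cfs/subsystems/$nqn/namespaces/1"
    rmdir "$cfs/ports/1"
    rmdir "$cfs/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet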
(1b36 0010): Already using the uio_pci_generic driver 00:17:49.947 00:17:49.947 real 0m36.536s 00:17:49.947 user 0m32.665s 00:17:49.947 sys 0m3.663s 00:17:49.947 ************************************ 00:17:49.947 END TEST nvmf_auth_host 00:17:49.947 ************************************ 00:17:49.947 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:49.947 14:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.209 14:00:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:17:50.209 14:00:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:50.209 14:00:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:50.209 14:00:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:50.209 14:00:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.209 ************************************ 00:17:50.209 START TEST nvmf_digest 00:17:50.209 ************************************ 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:50.209 * Looking for test storage... 00:17:50.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:50.209 14:00:39 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:50.209 Cannot find device "nvmf_tgt_br" 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # true 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:50.209 Cannot find device "nvmf_tgt_br2" 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # true 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:50.209 Cannot find device "nvmf_tgt_br" 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # true 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:50.209 Cannot find device "nvmf_tgt_br2" 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # true 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:50.209 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:50.467 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:50.467 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev 
nvmf_tgt_if2 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:50.467 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:50.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:50.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:17:50.467 00:17:50.467 --- 10.0.0.2 ping statistics --- 00:17:50.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.468 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:50.468 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:50.468 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 00:17:50.468 00:17:50.468 --- 10.0.0.3 ping statistics --- 00:17:50.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.468 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:50.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:50.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:17:50.468 00:17:50.468 --- 10.0.0.1 ping statistics --- 00:17:50.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.468 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:50.468 ************************************ 00:17:50.468 START TEST nvmf_digest_clean 00:17:50.468 ************************************ 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:50.468 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:50.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
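The ping exchange above is the last step of the virtual topology the digest tests run on: the initiator interface (nvmf_init_if, 10.0.0.1) stays in the root namespace, the target interfaces live in the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), and everything is joined through the nvmf_br bridge with an iptables rule admitting TCP port 4420. A trimmed-down sketch of that setup using the same names and addresses as the trace (the second target interface, nvmf_tgt_if2/10.0.0.3, is configured the same way and omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator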
00:17:50.726 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=79494 00:17:50.726 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:50.726 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 79494 00:17:50.726 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79494 ']' 00:17:50.726 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.726 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:50.726 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.727 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:50.727 14:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:50.727 [2024-07-25 14:00:39.570253] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:17:50.727 [2024-07-25 14:00:39.570593] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.727 [2024-07-25 14:00:39.704929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.985 [2024-07-25 14:00:39.874869] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.985 [2024-07-25 14:00:39.875361] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.985 [2024-07-25 14:00:39.875588] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.985 [2024-07-25 14:00:39.875747] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.985 [2024-07-25 14:00:39.875765] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
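nvmfappstart launches the target inside the namespace with --wait-for-rpc, so the application pauses after the notices above until the framework is started over /var/tmp/spdk.sock; common_target_config then builds the subsystem the digest runs will connect to. A hedged sketch of that bring-up (the transport flags match NVMF_TRANSPORT_OPTS set earlier; the null bdev size and the subsystem options are assumptions inferred from the "null0" and listener messages that follow, not copied from digest.sh):

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!

    rpc=scripts/rpc.py                        # defaults to /var/tmp/spdk.sock
    $rpc framework_start_init                 # resume startup once pre-init options are in place
    $rpc nvmf_create_transport -t tcp -o
    $rpc bdev_null_create null0 100 4096      # 100 MiB, 4 KiB blocks: size assumed
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420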
00:17:50.985 [2024-07-25 14:00:39.875833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.551 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:51.551 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:51.551 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:51.551 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:51.551 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:51.551 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.551 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:51.551 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:51.551 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:51.551 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.551 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:51.810 [2024-07-25 14:00:40.651622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:51.810 null0 00:17:51.810 [2024-07-25 14:00:40.725139] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.810 [2024-07-25 14:00:40.749457] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.810 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.810 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:51.810 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:51.810 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:51.810 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:51.810 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:51.810 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:51.810 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:51.810 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79526 00:17:51.810 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79526 /var/tmp/bperf.sock 00:17:51.810 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:51.810 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79526 ']' 00:17:51.810 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:17:51.810 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:51.810 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:51.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:51.810 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:51.810 14:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:51.810 [2024-07-25 14:00:40.808181] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:17:51.810 [2024-07-25 14:00:40.808568] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79526 ] 00:17:52.127 [2024-07-25 14:00:40.946680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.127 [2024-07-25 14:00:41.130460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.385 14:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:52.385 14:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:52.385 14:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:52.385 14:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:52.385 14:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:52.643 [2024-07-25 14:00:41.548517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:52.643 14:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:52.643 14:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:52.900 nvme0n1 00:17:53.158 14:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:53.158 14:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:53.158 Running I/O for 2 seconds... 
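Each run_bperf iteration is driven from the initiator side: bdevperf is started paused (-z with --wait-for-rpc), the framework is initialized over /var/tmp/bperf.sock, a controller is attached with the digest option under test (here --ddgst, i.e. TCP data digest), and bdevperf.py triggers the timed workload whose results follow. Roughly, with paths shortened and mirroring the calls in the trace rather than quoting digest.sh verbatim:

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -q 128 -t 2 -z --wait-for-rpc &
    bperfpid=$!

    rpc="scripts/rpc.py -s /var/tmp/bperf.sock"
    $rpc framework_start_init
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests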
00:17:55.056 00:17:55.056 Latency(us) 00:17:55.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.056 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:55.056 nvme0n1 : 2.01 14546.46 56.82 0.00 0.00 8793.02 7864.32 20018.27 00:17:55.056 =================================================================================================================== 00:17:55.056 Total : 14546.46 56.82 0.00 0.00 8793.02 7864.32 20018.27 00:17:55.056 0 00:17:55.056 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:55.056 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:55.056 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:55.056 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:55.056 | select(.opcode=="crc32c") 00:17:55.056 | "\(.module_name) \(.executed)"' 00:17:55.056 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:55.314 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:55.314 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:55.314 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:55.314 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:55.314 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79526 00:17:55.314 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79526 ']' 00:17:55.314 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79526 00:17:55.573 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:55.573 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:55.573 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79526 00:17:55.573 killing process with pid 79526 00:17:55.573 Received shutdown signal, test time was about 2.000000 seconds 00:17:55.573 00:17:55.573 Latency(us) 00:17:55.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.573 =================================================================================================================== 00:17:55.573 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:55.573 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:55.573 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:55.573 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79526' 00:17:55.573 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79526 00:17:55.573 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79526 00:17:55.831 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:55.831 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:55.831 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:55.831 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:55.831 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:55.831 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:55.831 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:55.831 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79579 00:17:55.831 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79579 /var/tmp/bperf.sock 00:17:55.831 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79579 ']' 00:17:55.831 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:55.831 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:55.831 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:55.831 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:55.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:55.831 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:55.831 14:00:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:55.831 [2024-07-25 14:00:44.738874] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:17:55.832 [2024-07-25 14:00:44.739260] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79579 ] 00:17:55.832 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:55.832 Zero copy mechanism will not be used. 
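After each timed run the test reads the accel framework statistics out of the bdevperf process to confirm which module actually computed the crc32c digests; with no DSA initiator configured it expects the software module and a non-zero executed count. A small sketch of that check, reusing the jq filter from the trace:

    stats=$(scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats)
    read -r acc_module acc_executed < <(jq -rc '.operations[]
        | select(.opcode=="crc32c")
        | "\(.module_name) \(.executed)"' <<< "$stats")

    (( acc_executed > 0 ))           # digests really went through the accel layer
    [[ $acc_module == software ]]    # DSA is disabled in this configuration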
00:17:56.090 [2024-07-25 14:00:44.877395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.090 [2024-07-25 14:00:45.031469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.027 14:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:57.027 14:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:57.027 14:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:57.027 14:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:57.027 14:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:57.027 [2024-07-25 14:00:46.024339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:57.284 14:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:57.284 14:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:57.540 nvme0n1 00:17:57.540 14:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:57.540 14:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:57.540 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:57.540 Zero copy mechanism will not be used. 00:17:57.540 Running I/O for 2 seconds... 
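nvmf_digest_clean repeats that sequence over a small workload matrix; the runs visible in this part of the log are 4 KiB random reads at queue depth 128, 128 KiB random reads at queue depth 16 (where, as the messages above note, the 64 KiB zero-copy threshold is exceeded and zero copy is skipped), and 4 KiB random writes at queue depth 128. In loop form this amounts to something like the following, with run_bperf taken from the trace and the final argument being the scan_dsa flag, false throughout this configuration; later parts of the log may add further combinations:

    for params in "randread 4096 128" "randread 131072 16" "randwrite 4096 128"; do
        run_bperf $params false
    done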
00:18:00.079 00:18:00.079 Latency(us) 00:18:00.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.079 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:00.079 nvme0n1 : 2.00 6549.55 818.69 0.00 0.00 2439.45 2293.76 4944.99 00:18:00.079 =================================================================================================================== 00:18:00.079 Total : 6549.55 818.69 0.00 0.00 2439.45 2293.76 4944.99 00:18:00.079 0 00:18:00.079 14:00:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:00.079 14:00:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:00.079 14:00:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:00.079 14:00:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:00.079 | select(.opcode=="crc32c") 00:18:00.079 | "\(.module_name) \(.executed)"' 00:18:00.079 14:00:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:00.079 14:00:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:00.079 14:00:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:00.079 14:00:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:00.079 14:00:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:00.079 14:00:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79579 00:18:00.079 14:00:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79579 ']' 00:18:00.079 14:00:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79579 00:18:00.079 14:00:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:00.079 14:00:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:00.079 14:00:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79579 00:18:00.079 14:00:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:00.079 killing process with pid 79579 00:18:00.079 14:00:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:00.079 14:00:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79579' 00:18:00.079 Received shutdown signal, test time was about 2.000000 seconds 00:18:00.079 00:18:00.079 Latency(us) 00:18:00.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.079 =================================================================================================================== 00:18:00.079 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:00.079 14:00:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79579 00:18:00.079 14:00:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79579 00:18:00.337 14:00:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:18:00.337 14:00:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:00.337 14:00:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:00.337 14:00:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:00.337 14:00:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:00.337 14:00:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:00.337 14:00:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:00.337 14:00:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79638 00:18:00.337 14:00:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79638 /var/tmp/bperf.sock 00:18:00.337 14:00:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79638 ']' 00:18:00.337 14:00:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:00.338 14:00:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:00.338 14:00:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:00.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:00.338 14:00:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:00.338 14:00:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:00.338 14:00:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:00.338 [2024-07-25 14:00:49.167632] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
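[editor's note] The get_accel_stats check that closes each run above queries the accel framework over the same socket, keeps only the crc32c counters, and asserts that the expected module executed them (software here, since DSA scanning is disabled). A sketch built from the accel_get_stats call and jq filter shown in the trace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
      | { read -r acc_module acc_executed
          # Digest sanity check: some crc32c ops ran, and in the expected module.
          (( acc_executed > 0 )) && [[ "$acc_module" == software ]] \
              && echo "crc32c via $acc_module: $acc_executed ops"; }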
00:18:00.338 [2024-07-25 14:00:49.167734] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79638 ] 00:18:00.338 [2024-07-25 14:00:49.306851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.595 [2024-07-25 14:00:49.448003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.527 14:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:01.527 14:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:18:01.527 14:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:01.527 14:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:01.527 14:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:01.527 [2024-07-25 14:00:50.548373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:01.785 14:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:01.785 14:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:02.042 nvme0n1 00:18:02.042 14:00:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:02.042 14:00:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:02.300 Running I/O for 2 seconds... 
00:18:04.246 00:18:04.247 Latency(us) 00:18:04.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.247 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:04.247 nvme0n1 : 2.00 15523.46 60.64 0.00 0.00 8237.04 7536.64 16681.89 00:18:04.247 =================================================================================================================== 00:18:04.247 Total : 15523.46 60.64 0.00 0.00 8237.04 7536.64 16681.89 00:18:04.247 0 00:18:04.247 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:04.247 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:04.247 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:04.247 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:04.247 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:04.247 | select(.opcode=="crc32c") 00:18:04.247 | "\(.module_name) \(.executed)"' 00:18:04.503 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:04.503 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:04.503 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:04.503 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:04.503 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79638 00:18:04.503 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79638 ']' 00:18:04.503 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79638 00:18:04.503 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:04.503 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:04.503 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79638 00:18:04.503 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:04.503 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:04.503 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79638' 00:18:04.503 killing process with pid 79638 00:18:04.503 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79638 00:18:04.503 Received shutdown signal, test time was about 2.000000 seconds 00:18:04.504 00:18:04.504 Latency(us) 00:18:04.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.504 =================================================================================================================== 00:18:04.504 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:04.504 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79638 00:18:05.067 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:18:05.068 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:05.068 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:05.068 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:05.068 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:05.068 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:05.068 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:05.068 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79700 00:18:05.068 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79700 /var/tmp/bperf.sock 00:18:05.068 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79700 ']' 00:18:05.068 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:05.068 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:05.068 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:05.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:05.068 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:05.068 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:05.068 14:00:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:05.068 [2024-07-25 14:00:53.856945] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:18:05.068 [2024-07-25 14:00:53.857067] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79700 ] 00:18:05.068 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:05.068 Zero copy mechanism will not be used. 
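[editor's note] Each run is torn down with the killprocess/wait sequence visible above: confirm the PID is still alive, look up its comm name (the bdevperf reactor shows up as reactor_1), send SIGTERM, then reap it so the "Received shutdown signal" summary flushes before the next run starts. A condensed sketch following the xtrace (simplified; the real helper in autotest_common.sh also special-cases sudo-wrapped processes, and the trace issues the final wait as a separate step):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] && kill -0 "$pid" || return 1       # is it still running?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1 for bdevperf
        echo "killing process with pid $pid ($process_name)"
        kill "$pid"
        wait "$pid"                                       # reap; lets the shutdown stats print
    }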
00:18:05.068 [2024-07-25 14:00:53.997994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.325 [2024-07-25 14:00:54.153685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.892 14:00:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:05.892 14:00:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:18:05.892 14:00:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:05.892 14:00:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:05.892 14:00:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:06.458 [2024-07-25 14:00:55.197333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:06.458 14:00:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:06.458 14:00:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:06.737 nvme0n1 00:18:06.737 14:00:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:06.737 14:00:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:06.737 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:06.737 Zero copy mechanism will not be used. 00:18:06.737 Running I/O for 2 seconds... 
00:18:09.268 00:18:09.268 Latency(us) 00:18:09.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.268 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:09.268 nvme0n1 : 2.00 6063.94 757.99 0.00 0.00 2632.42 1966.08 9472.93 00:18:09.268 =================================================================================================================== 00:18:09.268 Total : 6063.94 757.99 0.00 0.00 2632.42 1966.08 9472.93 00:18:09.268 0 00:18:09.268 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:09.268 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:09.268 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:09.268 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:09.268 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:09.268 | select(.opcode=="crc32c") 00:18:09.268 | "\(.module_name) \(.executed)"' 00:18:09.268 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:09.268 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:09.268 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:09.268 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:09.268 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79700 00:18:09.268 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79700 ']' 00:18:09.268 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79700 00:18:09.268 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:09.268 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:09.268 14:00:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79700 00:18:09.268 killing process with pid 79700 00:18:09.268 Received shutdown signal, test time was about 2.000000 seconds 00:18:09.268 00:18:09.268 Latency(us) 00:18:09.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.268 =================================================================================================================== 00:18:09.268 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:09.268 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:09.268 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:09.268 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79700' 00:18:09.268 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79700 00:18:09.268 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79700 00:18:09.527 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79494 00:18:09.527 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79494 ']' 00:18:09.527 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79494 00:18:09.527 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:09.527 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:09.527 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79494 00:18:09.527 killing process with pid 79494 00:18:09.527 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:09.527 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:09.527 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79494' 00:18:09.527 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79494 00:18:09.527 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 79494 00:18:09.786 00:18:09.786 real 0m19.090s 00:18:09.786 user 0m36.072s 00:18:09.786 sys 0m5.815s 00:18:09.786 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:09.786 ************************************ 00:18:09.786 END TEST nvmf_digest_clean 00:18:09.786 ************************************ 00:18:09.786 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:09.786 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:09.786 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:09.786 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:09.786 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:09.786 ************************************ 00:18:09.786 START TEST nvmf_digest_error 00:18:09.786 ************************************ 00:18:09.786 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:18:09.786 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:18:09.786 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:09.786 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:09.786 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:09.786 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=79789 00:18:09.786 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 79789 00:18:09.786 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
--wait-for-rpc 00:18:09.786 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79789 ']' 00:18:09.786 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.786 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:09.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.786 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.786 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:09.786 14:00:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:09.786 [2024-07-25 14:00:58.697473] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:18:09.786 [2024-07-25 14:00:58.697590] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.045 [2024-07-25 14:00:58.840906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.045 [2024-07-25 14:00:58.956859] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.045 [2024-07-25 14:00:58.956916] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.045 [2024-07-25 14:00:58.956927] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.045 [2024-07-25 14:00:58.956936] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.045 [2024-07-25 14:00:58.956943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
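[editor's note] For the error-injection half of the suite the target itself is started in deferred-init mode: nvmfappstart --wait-for-rpc launches nvmf_tgt inside the test's network namespace with all tracepoint groups enabled (-e 0xFFFF), then waits on the default /var/tmp/spdk.sock before the crc32c error module is wired in. A sketch of that start-up using the command from the trace (the polling loop again stands in for waitforlisten):

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Stand-in for waitforlisten: the target serves RPCs on /var/tmp/spdk.sock by default.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done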
00:18:10.045 [2024-07-25 14:00:58.956971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:10.991 [2024-07-25 14:00:59.737474] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:10.991 [2024-07-25 14:00:59.802104] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:10.991 null0 00:18:10.991 [2024-07-25 14:00:59.852662] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.991 [2024-07-25 14:00:59.876805] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79822 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79822 /var/tmp/bperf.sock 00:18:10.991 14:00:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79822 ']' 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:10.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:10.991 14:00:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:10.991 [2024-07-25 14:00:59.929312] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:18:10.991 [2024-07-25 14:00:59.929628] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79822 ] 00:18:11.252 [2024-07-25 14:01:00.066161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.252 [2024-07-25 14:01:00.196012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.252 [2024-07-25 14:01:00.252070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:12.187 14:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:12.187 14:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:12.187 14:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:12.187 14:01:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:12.187 14:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:12.187 14:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.187 14:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:12.187 14:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.187 14:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:12.187 14:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:12.754 nvme0n1 00:18:12.754 14:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:12.754 14:01:01 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.754 14:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:12.754 14:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.754 14:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:12.754 14:01:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:12.754 Running I/O for 2 seconds... 00:18:12.754 [2024-07-25 14:01:01.688700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:12.754 [2024-07-25 14:01:01.688777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.754 [2024-07-25 14:01:01.688794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.754 [2024-07-25 14:01:01.706065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:12.754 [2024-07-25 14:01:01.706133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.754 [2024-07-25 14:01:01.706149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.754 [2024-07-25 14:01:01.723447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:12.754 [2024-07-25 14:01:01.723518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.754 [2024-07-25 14:01:01.723534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.754 [2024-07-25 14:01:01.740765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:12.754 [2024-07-25 14:01:01.740831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.754 [2024-07-25 14:01:01.740846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.754 [2024-07-25 14:01:01.757943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:12.754 [2024-07-25 14:01:01.758004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.754 [2024-07-25 14:01:01.758020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.754 [2024-07-25 14:01:01.775116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:12.754 [2024-07-25 14:01:01.775173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:661 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.754 [2024-07-25 14:01:01.775188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.013 [2024-07-25 14:01:01.792395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.013 [2024-07-25 14:01:01.792461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.013 [2024-07-25 14:01:01.792476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.013 [2024-07-25 14:01:01.809763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.013 [2024-07-25 14:01:01.809840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.013 [2024-07-25 14:01:01.809855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.013 [2024-07-25 14:01:01.826945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.013 [2024-07-25 14:01:01.827005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.013 [2024-07-25 14:01:01.827021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.013 [2024-07-25 14:01:01.844071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.013 [2024-07-25 14:01:01.844141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.013 [2024-07-25 14:01:01.844156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.013 [2024-07-25 14:01:01.861346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.013 [2024-07-25 14:01:01.861414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.013 [2024-07-25 14:01:01.861441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.013 [2024-07-25 14:01:01.878808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.013 [2024-07-25 14:01:01.878875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.013 [2024-07-25 14:01:01.878892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.013 [2024-07-25 14:01:01.896281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.013 [2024-07-25 14:01:01.896353] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.013 [2024-07-25 14:01:01.896369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.013 [2024-07-25 14:01:01.913884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.013 [2024-07-25 14:01:01.913953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.013 [2024-07-25 14:01:01.913970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.014 [2024-07-25 14:01:01.932694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.014 [2024-07-25 14:01:01.932787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.014 [2024-07-25 14:01:01.932814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.014 [2024-07-25 14:01:01.950807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.014 [2024-07-25 14:01:01.950873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.014 [2024-07-25 14:01:01.950889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.014 [2024-07-25 14:01:01.968096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.014 [2024-07-25 14:01:01.968169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.014 [2024-07-25 14:01:01.968185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.014 [2024-07-25 14:01:01.985328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.014 [2024-07-25 14:01:01.985392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.014 [2024-07-25 14:01:01.985408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.014 [2024-07-25 14:01:02.002496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.014 [2024-07-25 14:01:02.002557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.014 [2024-07-25 14:01:02.002573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.014 [2024-07-25 14:01:02.019774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.014 [2024-07-25 14:01:02.019840] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.014 [2024-07-25 14:01:02.019857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.014 [2024-07-25 14:01:02.037117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.014 [2024-07-25 14:01:02.037187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.014 [2024-07-25 14:01:02.037204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.274 [2024-07-25 14:01:02.054500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.274 [2024-07-25 14:01:02.054569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.274 [2024-07-25 14:01:02.054585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.274 [2024-07-25 14:01:02.071792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.274 [2024-07-25 14:01:02.071853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.274 [2024-07-25 14:01:02.071867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.274 [2024-07-25 14:01:02.089090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.274 [2024-07-25 14:01:02.089154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.274 [2024-07-25 14:01:02.089170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.274 [2024-07-25 14:01:02.106295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.274 [2024-07-25 14:01:02.106370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.274 [2024-07-25 14:01:02.106385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.274 [2024-07-25 14:01:02.123524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.274 [2024-07-25 14:01:02.123582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.274 [2024-07-25 14:01:02.123597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.274 [2024-07-25 14:01:02.140725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x218b4f0) 00:18:13.274 [2024-07-25 14:01:02.140788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.274 [2024-07-25 14:01:02.140804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.274 [2024-07-25 14:01:02.157926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.274 [2024-07-25 14:01:02.157984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.274 [2024-07-25 14:01:02.158000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.274 [2024-07-25 14:01:02.175113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.274 [2024-07-25 14:01:02.175174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.274 [2024-07-25 14:01:02.175189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.274 [2024-07-25 14:01:02.192250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.274 [2024-07-25 14:01:02.192323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.274 [2024-07-25 14:01:02.192339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.274 [2024-07-25 14:01:02.209524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.274 [2024-07-25 14:01:02.209581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.274 [2024-07-25 14:01:02.209596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.274 [2024-07-25 14:01:02.226844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.274 [2024-07-25 14:01:02.226906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.274 [2024-07-25 14:01:02.226921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.274 [2024-07-25 14:01:02.243991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.274 [2024-07-25 14:01:02.244051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.274 [2024-07-25 14:01:02.244067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.274 [2024-07-25 14:01:02.261166] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.274 [2024-07-25 14:01:02.261228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.274 [2024-07-25 14:01:02.261243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.274 [2024-07-25 14:01:02.278349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.274 [2024-07-25 14:01:02.278412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.274 [2024-07-25 14:01:02.278427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.274 [2024-07-25 14:01:02.295667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.274 [2024-07-25 14:01:02.295726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.274 [2024-07-25 14:01:02.295741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.533 [2024-07-25 14:01:02.312995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.533 [2024-07-25 14:01:02.313046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.533 [2024-07-25 14:01:02.313061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.533 [2024-07-25 14:01:02.330449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.533 [2024-07-25 14:01:02.330513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.533 [2024-07-25 14:01:02.330529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.533 [2024-07-25 14:01:02.347791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.533 [2024-07-25 14:01:02.347853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.533 [2024-07-25 14:01:02.347868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.533 [2024-07-25 14:01:02.365023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.533 [2024-07-25 14:01:02.365088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.533 [2024-07-25 14:01:02.365103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:18:13.533 [2024-07-25 14:01:02.382275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.533 [2024-07-25 14:01:02.382345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.533 [2024-07-25 14:01:02.382361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.533 [2024-07-25 14:01:02.399505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.533 [2024-07-25 14:01:02.399562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.533 [2024-07-25 14:01:02.399577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.533 [2024-07-25 14:01:02.416707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.534 [2024-07-25 14:01:02.416770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.534 [2024-07-25 14:01:02.416785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.534 [2024-07-25 14:01:02.435266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.534 [2024-07-25 14:01:02.435350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.534 [2024-07-25 14:01:02.435367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.534 [2024-07-25 14:01:02.452830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.534 [2024-07-25 14:01:02.452899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.534 [2024-07-25 14:01:02.452914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.534 [2024-07-25 14:01:02.470361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.534 [2024-07-25 14:01:02.470428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.534 [2024-07-25 14:01:02.470443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.534 [2024-07-25 14:01:02.487846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.534 [2024-07-25 14:01:02.487903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.534 [2024-07-25 14:01:02.487918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.534 [2024-07-25 14:01:02.505077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.534 [2024-07-25 14:01:02.505136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.534 [2024-07-25 14:01:02.505151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.534 [2024-07-25 14:01:02.522622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.534 [2024-07-25 14:01:02.522685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.534 [2024-07-25 14:01:02.522701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.534 [2024-07-25 14:01:02.540293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.534 [2024-07-25 14:01:02.540362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.534 [2024-07-25 14:01:02.540378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.534 [2024-07-25 14:01:02.558062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.534 [2024-07-25 14:01:02.558124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.534 [2024-07-25 14:01:02.558141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.792 [2024-07-25 14:01:02.575516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.792 [2024-07-25 14:01:02.575572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.792 [2024-07-25 14:01:02.575588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.792 [2024-07-25 14:01:02.592969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.792 [2024-07-25 14:01:02.593036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.792 [2024-07-25 14:01:02.593052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.792 [2024-07-25 14:01:02.610179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.792 [2024-07-25 14:01:02.610239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.792 [2024-07-25 14:01:02.610254] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.792 [2024-07-25 14:01:02.627367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.792 [2024-07-25 14:01:02.627429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.792 [2024-07-25 14:01:02.627444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.792 [2024-07-25 14:01:02.644502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.792 [2024-07-25 14:01:02.644559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.792 [2024-07-25 14:01:02.644574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.792 [2024-07-25 14:01:02.661557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.792 [2024-07-25 14:01:02.661615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.792 [2024-07-25 14:01:02.661631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.792 [2024-07-25 14:01:02.678573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.792 [2024-07-25 14:01:02.678624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.792 [2024-07-25 14:01:02.678638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.792 [2024-07-25 14:01:02.695562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.792 [2024-07-25 14:01:02.695616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.792 [2024-07-25 14:01:02.695631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.792 [2024-07-25 14:01:02.712594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.792 [2024-07-25 14:01:02.712649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.792 [2024-07-25 14:01:02.712663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.792 [2024-07-25 14:01:02.729645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.792 [2024-07-25 14:01:02.729707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18034 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:13.792 [2024-07-25 14:01:02.729722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.792 [2024-07-25 14:01:02.747196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.792 [2024-07-25 14:01:02.747266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.792 [2024-07-25 14:01:02.747281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.792 [2024-07-25 14:01:02.764424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.792 [2024-07-25 14:01:02.764486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.792 [2024-07-25 14:01:02.764502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.792 [2024-07-25 14:01:02.789261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.792 [2024-07-25 14:01:02.789348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.792 [2024-07-25 14:01:02.789365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.792 [2024-07-25 14:01:02.806564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:13.792 [2024-07-25 14:01:02.806632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.792 [2024-07-25 14:01:02.806647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.051 [2024-07-25 14:01:02.823850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.051 [2024-07-25 14:01:02.823918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.051 [2024-07-25 14:01:02.823935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.051 [2024-07-25 14:01:02.841783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.051 [2024-07-25 14:01:02.841860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.051 [2024-07-25 14:01:02.841876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.051 [2024-07-25 14:01:02.859346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.051 [2024-07-25 14:01:02.859456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:118 nsid:1 lba:19186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.051 [2024-07-25 14:01:02.859484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.051 [2024-07-25 14:01:02.876981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.051 [2024-07-25 14:01:02.877053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.051 [2024-07-25 14:01:02.877068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.051 [2024-07-25 14:01:02.894435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.051 [2024-07-25 14:01:02.894503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.051 [2024-07-25 14:01:02.894519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.051 [2024-07-25 14:01:02.911710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.051 [2024-07-25 14:01:02.911778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.051 [2024-07-25 14:01:02.911794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.051 [2024-07-25 14:01:02.929061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.051 [2024-07-25 14:01:02.929129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.051 [2024-07-25 14:01:02.929144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.051 [2024-07-25 14:01:02.946538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.051 [2024-07-25 14:01:02.946614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.051 [2024-07-25 14:01:02.946631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.051 [2024-07-25 14:01:02.963898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.051 [2024-07-25 14:01:02.963969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.051 [2024-07-25 14:01:02.963985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.051 [2024-07-25 14:01:02.981296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.051 [2024-07-25 14:01:02.981366] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.051 [2024-07-25 14:01:02.981381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.051 [2024-07-25 14:01:02.998494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.051 [2024-07-25 14:01:02.998562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.051 [2024-07-25 14:01:02.998577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.051 [2024-07-25 14:01:03.015777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.051 [2024-07-25 14:01:03.015845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.051 [2024-07-25 14:01:03.015860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.051 [2024-07-25 14:01:03.033098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.051 [2024-07-25 14:01:03.033163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.051 [2024-07-25 14:01:03.033178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.051 [2024-07-25 14:01:03.050402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.051 [2024-07-25 14:01:03.050469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.051 [2024-07-25 14:01:03.050484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.051 [2024-07-25 14:01:03.067843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.051 [2024-07-25 14:01:03.067910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.051 [2024-07-25 14:01:03.067925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.310 [2024-07-25 14:01:03.085212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.310 [2024-07-25 14:01:03.085280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.310 [2024-07-25 14:01:03.085296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.310 [2024-07-25 14:01:03.102529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x218b4f0) 00:18:14.310 [2024-07-25 14:01:03.102594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.310 [2024-07-25 14:01:03.102610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.310 [2024-07-25 14:01:03.119864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.310 [2024-07-25 14:01:03.119930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.310 [2024-07-25 14:01:03.119945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.310 [2024-07-25 14:01:03.137403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.310 [2024-07-25 14:01:03.137472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.310 [2024-07-25 14:01:03.137488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.310 [2024-07-25 14:01:03.154785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.310 [2024-07-25 14:01:03.154848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.310 [2024-07-25 14:01:03.154863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.310 [2024-07-25 14:01:03.172113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.310 [2024-07-25 14:01:03.172176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.310 [2024-07-25 14:01:03.172192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.310 [2024-07-25 14:01:03.189655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.310 [2024-07-25 14:01:03.189724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.310 [2024-07-25 14:01:03.189739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.310 [2024-07-25 14:01:03.207016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.310 [2024-07-25 14:01:03.207079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.310 [2024-07-25 14:01:03.207094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.310 [2024-07-25 14:01:03.224431] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.310 [2024-07-25 14:01:03.224496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.310 [2024-07-25 14:01:03.224511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.310 [2024-07-25 14:01:03.242039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.310 [2024-07-25 14:01:03.242105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.310 [2024-07-25 14:01:03.242121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.310 [2024-07-25 14:01:03.259830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.310 [2024-07-25 14:01:03.259889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.310 [2024-07-25 14:01:03.259904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.310 [2024-07-25 14:01:03.277252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.310 [2024-07-25 14:01:03.277336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.311 [2024-07-25 14:01:03.277352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.311 [2024-07-25 14:01:03.294566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.311 [2024-07-25 14:01:03.294627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.311 [2024-07-25 14:01:03.294643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.311 [2024-07-25 14:01:03.311799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.311 [2024-07-25 14:01:03.311863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.311 [2024-07-25 14:01:03.311879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.311 [2024-07-25 14:01:03.329035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.311 [2024-07-25 14:01:03.329099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.311 [2024-07-25 14:01:03.329114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:14.569 [2024-07-25 14:01:03.346362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.569 [2024-07-25 14:01:03.346426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.569 [2024-07-25 14:01:03.346443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.569 [2024-07-25 14:01:03.364263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.569 [2024-07-25 14:01:03.364350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.569 [2024-07-25 14:01:03.364368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.569 [2024-07-25 14:01:03.381516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.569 [2024-07-25 14:01:03.381578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.569 [2024-07-25 14:01:03.381593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.569 [2024-07-25 14:01:03.398728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.569 [2024-07-25 14:01:03.398792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.569 [2024-07-25 14:01:03.398807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.569 [2024-07-25 14:01:03.415962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.569 [2024-07-25 14:01:03.416026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.569 [2024-07-25 14:01:03.416041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.569 [2024-07-25 14:01:03.433200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.569 [2024-07-25 14:01:03.433278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.569 [2024-07-25 14:01:03.433296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.569 [2024-07-25 14:01:03.450343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.569 [2024-07-25 14:01:03.450398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.569 [2024-07-25 14:01:03.450413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.569 [2024-07-25 14:01:03.467484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.569 [2024-07-25 14:01:03.467544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.569 [2024-07-25 14:01:03.467559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.569 [2024-07-25 14:01:03.484608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.569 [2024-07-25 14:01:03.484661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.569 [2024-07-25 14:01:03.484675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.569 [2024-07-25 14:01:03.501651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.569 [2024-07-25 14:01:03.501707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.569 [2024-07-25 14:01:03.501721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.569 [2024-07-25 14:01:03.518791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.569 [2024-07-25 14:01:03.518849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.569 [2024-07-25 14:01:03.518864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.569 [2024-07-25 14:01:03.535917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.569 [2024-07-25 14:01:03.535975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.569 [2024-07-25 14:01:03.535989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.569 [2024-07-25 14:01:03.553256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.569 [2024-07-25 14:01:03.553323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.569 [2024-07-25 14:01:03.553339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.569 [2024-07-25 14:01:03.570910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.569 [2024-07-25 14:01:03.570974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.569 [2024-07-25 14:01:03.570990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.569 [2024-07-25 14:01:03.588290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.569 [2024-07-25 14:01:03.588346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.569 [2024-07-25 14:01:03.588362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.827 [2024-07-25 14:01:03.605751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.827 [2024-07-25 14:01:03.605813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.827 [2024-07-25 14:01:03.605828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.827 [2024-07-25 14:01:03.623150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.827 [2024-07-25 14:01:03.623219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.827 [2024-07-25 14:01:03.623235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.827 [2024-07-25 14:01:03.641265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.827 [2024-07-25 14:01:03.641390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.827 [2024-07-25 14:01:03.641419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.827 [2024-07-25 14:01:03.659321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x218b4f0) 00:18:14.827 [2024-07-25 14:01:03.659386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.827 [2024-07-25 14:01:03.659402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.827 00:18:14.827 Latency(us) 00:18:14.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.827 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:14.827 nvme0n1 : 2.01 14527.93 56.75 0.00 0.00 8803.06 8043.05 33602.09 00:18:14.827 =================================================================================================================== 00:18:14.827 Total : 14527.93 56.75 0.00 0.00 8803.06 8043.05 33602.09 00:18:14.827 0 00:18:14.827 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:14.827 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:14.827 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:14.827 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:14.827 | .driver_specific 00:18:14.827 | .nvme_error 00:18:14.827 | .status_code 00:18:14.827 | .command_transient_transport_error' 00:18:15.086 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 114 > 0 )) 00:18:15.086 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79822 00:18:15.086 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79822 ']' 00:18:15.086 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79822 00:18:15.086 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:15.086 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:15.086 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79822 00:18:15.086 killing process with pid 79822 00:18:15.086 Received shutdown signal, test time was about 2.000000 seconds 00:18:15.086 00:18:15.086 Latency(us) 00:18:15.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.086 =================================================================================================================== 00:18:15.086 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:15.086 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:15.086 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:15.086 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79822' 00:18:15.086 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79822 00:18:15.086 14:01:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79822 00:18:15.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
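The long run of "(00/22)" completions above is the point of this test: each injected data digest failure surfaces on the host as an NVMe COMMAND TRANSIENT TRANSPORT ERROR, and because bdev_nvme was started with --nvme-error-stat those completions are tallied per bdev. The trace here reads that tally through bdev_get_iostat and requires it to be non-zero ((( 114 > 0 )) in this run) before killing the bdevperf process. A minimal sketch of that check, reusing only the socket path, bdev name and jq filter visible in the trace (the variable name errcount is mine):

# Count NVMe 'command transient transport error' completions recorded for nvme0n1,
# read over the bdevperf RPC socket used throughout this test.
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The test only passes when at least one injected digest error was detected and counted.
(( errcount > 0 )) && echo "transient transport errors seen: $errcount"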
00:18:15.344 14:01:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:15.344 14:01:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:15.344 14:01:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:15.344 14:01:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:15.344 14:01:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:15.344 14:01:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79882 00:18:15.344 14:01:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79882 /var/tmp/bperf.sock 00:18:15.344 14:01:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:15.344 14:01:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79882 ']' 00:18:15.344 14:01:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:15.344 14:01:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:15.344 14:01:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:15.344 14:01:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:15.344 14:01:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:15.344 [2024-07-25 14:01:04.239901] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:18:15.344 [2024-07-25 14:01:04.240281] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 00:18:15.344 Zero copy mechanism will not be used. 
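run_bperf_err now repeats the error run with a 128 KiB I/O size and queue depth 16: a fresh bdevperf is started on core mask 0x2 with its own RPC socket, in -z mode so it idles until perform_tests is issued, and the script waits for that socket before configuring anything. A rough sketch of the launch with the same binary path and socket as the trace; the polling loop is only a stand-in for the autotest waitforlisten helper, and rpc_get_methods is just a cheap RPC to probe with (both are assumptions, not the helper itself):

# Start bdevperf for the 131072-byte randread pass; -z makes it wait for perform_tests.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!
# Poll until the private RPC socket answers before sending any configuration RPCs.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done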
00:18:15.344 llocations --file-prefix=spdk_pid79882 ] 00:18:15.602 [2024-07-25 14:01:04.378927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.602 [2024-07-25 14:01:04.509565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.602 [2024-07-25 14:01:04.565293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:16.169 14:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:16.169 14:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:16.170 14:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:16.170 14:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:16.736 14:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:16.736 14:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.736 14:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:16.736 14:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.736 14:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:16.736 14:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:16.995 nvme0n1 00:18:16.995 14:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:16.995 14:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.995 14:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:16.995 14:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.995 14:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:16.995 14:01:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:16.995 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:16.995 Zero copy mechanism will not be used. 00:18:16.995 Running I/O for 2 seconds... 
00:18:16.995 [2024-07-25 14:01:05.959019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:16.995 [2024-07-25 14:01:05.959086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.995 [2024-07-25 14:01:05.959103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:16.995 [2024-07-25 14:01:05.963577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:16.995 [2024-07-25 14:01:05.963625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.995 [2024-07-25 14:01:05.963640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:16.995 [2024-07-25 14:01:05.968224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:16.996 [2024-07-25 14:01:05.968274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.996 [2024-07-25 14:01:05.968289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:16.996 [2024-07-25 14:01:05.972713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:16.996 [2024-07-25 14:01:05.972763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.996 [2024-07-25 14:01:05.972779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:16.996 [2024-07-25 14:01:05.977192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:16.996 [2024-07-25 14:01:05.977241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.996 [2024-07-25 14:01:05.977257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:16.996 [2024-07-25 14:01:05.981621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:16.996 [2024-07-25 14:01:05.981667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.996 [2024-07-25 14:01:05.981683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:16.996 [2024-07-25 14:01:05.986105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:16.996 [2024-07-25 14:01:05.986153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.996 [2024-07-25 14:01:05.986169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:16.996 [2024-07-25 14:01:05.990539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:16.996 [2024-07-25 14:01:05.990586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.996 [2024-07-25 14:01:05.990601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:16.996 [2024-07-25 14:01:05.995020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:16.996 [2024-07-25 14:01:05.995069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.996 [2024-07-25 14:01:05.995085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:16.996 [2024-07-25 14:01:05.999568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:16.996 [2024-07-25 14:01:05.999617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.996 [2024-07-25 14:01:05.999633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:16.996 [2024-07-25 14:01:06.004055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:16.996 [2024-07-25 14:01:06.004102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.996 [2024-07-25 14:01:06.004131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:16.996 [2024-07-25 14:01:06.008537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:16.996 [2024-07-25 14:01:06.008582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.996 [2024-07-25 14:01:06.008597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:16.996 [2024-07-25 14:01:06.012866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:16.996 [2024-07-25 14:01:06.012916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.996 [2024-07-25 14:01:06.012931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:16.996 [2024-07-25 14:01:06.017613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:16.996 [2024-07-25 14:01:06.017663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.996 [2024-07-25 14:01:06.017679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:16.996 [2024-07-25 14:01:06.022065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:16.996 [2024-07-25 14:01:06.022110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.996 [2024-07-25 14:01:06.022125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.256 [2024-07-25 14:01:06.026505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.256 [2024-07-25 14:01:06.026551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.256 [2024-07-25 14:01:06.026567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.256 [2024-07-25 14:01:06.030939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.256 [2024-07-25 14:01:06.030985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.256 [2024-07-25 14:01:06.031000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.256 [2024-07-25 14:01:06.035449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.256 [2024-07-25 14:01:06.035496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.256 [2024-07-25 14:01:06.035511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.256 [2024-07-25 14:01:06.040036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.256 [2024-07-25 14:01:06.040084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.256 [2024-07-25 14:01:06.040100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.256 [2024-07-25 14:01:06.045175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.256 [2024-07-25 14:01:06.045220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.256 [2024-07-25 14:01:06.045236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.256 [2024-07-25 14:01:06.050512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.256 [2024-07-25 14:01:06.050559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
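The len:32 READ failures running through this stretch are the expected outcome of the sequence traced just before they begin: error statistics are enabled on bdev_nvme, the controller is attached over TCP with data digest enabled (--ddgst), crc32c corruption is injected into the accel layer once every 32 operations, and the queued bdevperf job is kicked off. A condensed sketch of that arm-and-run sequence, using only the sockets, address and NQN from the trace; sending accel_error_inject_error without -s (to the default application socket, as rpc_cmd does in the trace) is an assumption about where the corruption is armed:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Count NVMe error statuses per bdev and use the retry count shown in the trace (-1).
$RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Attach the target subsystem over TCP with data digest enabled on the connection.
$RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt every 32nd crc32c accel operation so data digest checks fail on the wire.
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32
# Drive the waiting bdevperf job (-z) for its 2-second run.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests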
00:18:17.256 [2024-07-25 14:01:06.050574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.256 [2024-07-25 14:01:06.055857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.256 [2024-07-25 14:01:06.055906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.256 [2024-07-25 14:01:06.055923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.256 [2024-07-25 14:01:06.061142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.256 [2024-07-25 14:01:06.061198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.256 [2024-07-25 14:01:06.061218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.256 [2024-07-25 14:01:06.066366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.256 [2024-07-25 14:01:06.066412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.256 [2024-07-25 14:01:06.066426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.256 [2024-07-25 14:01:06.071608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.256 [2024-07-25 14:01:06.071660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.256 [2024-07-25 14:01:06.071674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.256 [2024-07-25 14:01:06.076389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.256 [2024-07-25 14:01:06.076440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.256 [2024-07-25 14:01:06.076454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.256 [2024-07-25 14:01:06.081436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.256 [2024-07-25 14:01:06.081483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.256 [2024-07-25 14:01:06.081497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.256 [2024-07-25 14:01:06.086472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.256 [2024-07-25 14:01:06.086518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.256 [2024-07-25 14:01:06.086532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.256 [2024-07-25 14:01:06.091701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.256 [2024-07-25 14:01:06.091748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.256 [2024-07-25 14:01:06.091763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.256 [2024-07-25 14:01:06.096935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.256 [2024-07-25 14:01:06.096984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.256 [2024-07-25 14:01:06.096998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.256 [2024-07-25 14:01:06.101815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.256 [2024-07-25 14:01:06.101865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.256 [2024-07-25 14:01:06.101887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.256 [2024-07-25 14:01:06.106314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.256 [2024-07-25 14:01:06.106360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.256 [2024-07-25 14:01:06.106375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.256 [2024-07-25 14:01:06.110769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.256 [2024-07-25 14:01:06.110814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.256 [2024-07-25 14:01:06.110828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.256 [2024-07-25 14:01:06.115211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.256 [2024-07-25 14:01:06.115255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.115270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.119629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.119675] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.119690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.124083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.124147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.124163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.128830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.128882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.128897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.133440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.133499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.133513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.137870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.137919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.137933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.142331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.142380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.142395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.146881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.146925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.146939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.151277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 
14:01:06.151338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.151353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.155728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.155776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.155790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.160187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.160242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.160263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.164514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.164560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.164575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.168977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.169027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.169042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.173357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.173405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.173419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.177849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.177899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.177914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.182217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.182265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.182281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.187048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.187101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.187116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.191626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.191679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.191693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.196098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.196159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.196174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.200620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.200671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.200686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.205123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.205173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.205188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.209575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.209624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.209640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.214010] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.214056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.214070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.218514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.218561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.218576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.222956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.223001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.223015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.227394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.227439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.227452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.231754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.257 [2024-07-25 14:01:06.231801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.257 [2024-07-25 14:01:06.231815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.257 [2024-07-25 14:01:06.236154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.258 [2024-07-25 14:01:06.236210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.258 [2024-07-25 14:01:06.236235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.258 [2024-07-25 14:01:06.240589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.258 [2024-07-25 14:01:06.240635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.258 [2024-07-25 14:01:06.240649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:18:17.258 [2024-07-25 14:01:06.245042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.258 [2024-07-25 14:01:06.245089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.258 [2024-07-25 14:01:06.245103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.258 [2024-07-25 14:01:06.249445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.258 [2024-07-25 14:01:06.249490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.258 [2024-07-25 14:01:06.249506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.258 [2024-07-25 14:01:06.253873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.258 [2024-07-25 14:01:06.253921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.258 [2024-07-25 14:01:06.253934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.258 [2024-07-25 14:01:06.258429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.258 [2024-07-25 14:01:06.258476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.258 [2024-07-25 14:01:06.258491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.258 [2024-07-25 14:01:06.262849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.258 [2024-07-25 14:01:06.262904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.258 [2024-07-25 14:01:06.262919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.258 [2024-07-25 14:01:06.267312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.258 [2024-07-25 14:01:06.267353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.258 [2024-07-25 14:01:06.267367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.258 [2024-07-25 14:01:06.271779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.258 [2024-07-25 14:01:06.271825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.258 [2024-07-25 14:01:06.271839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.258 [2024-07-25 14:01:06.276227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.258 [2024-07-25 14:01:06.276275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.258 [2024-07-25 14:01:06.276288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.258 [2024-07-25 14:01:06.280629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.258 [2024-07-25 14:01:06.280676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.258 [2024-07-25 14:01:06.280690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.258 [2024-07-25 14:01:06.285057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.258 [2024-07-25 14:01:06.285103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.258 [2024-07-25 14:01:06.285117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.518 [2024-07-25 14:01:06.289567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.518 [2024-07-25 14:01:06.289615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.518 [2024-07-25 14:01:06.289629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.518 [2024-07-25 14:01:06.293984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.518 [2024-07-25 14:01:06.294030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.518 [2024-07-25 14:01:06.294044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.518 [2024-07-25 14:01:06.298534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.518 [2024-07-25 14:01:06.298579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.518 [2024-07-25 14:01:06.298594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.518 [2024-07-25 14:01:06.302859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.518 [2024-07-25 14:01:06.302904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.518 [2024-07-25 14:01:06.302918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.518 [2024-07-25 14:01:06.307215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.518 [2024-07-25 14:01:06.307269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.518 [2024-07-25 14:01:06.307285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.518 [2024-07-25 14:01:06.311554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.518 [2024-07-25 14:01:06.311598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.518 [2024-07-25 14:01:06.311612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.518 [2024-07-25 14:01:06.315966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.518 [2024-07-25 14:01:06.316019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.518 [2024-07-25 14:01:06.316034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.518 [2024-07-25 14:01:06.320451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.518 [2024-07-25 14:01:06.320506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.518 [2024-07-25 14:01:06.320520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.518 [2024-07-25 14:01:06.324816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.518 [2024-07-25 14:01:06.324863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.518 [2024-07-25 14:01:06.324877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.518 [2024-07-25 14:01:06.329271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.518 [2024-07-25 14:01:06.329330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.518 [2024-07-25 14:01:06.329344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.518 [2024-07-25 14:01:06.333725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.518 [2024-07-25 14:01:06.333775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:17.518 [2024-07-25 14:01:06.333789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.518 [2024-07-25 14:01:06.338252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.518 [2024-07-25 14:01:06.338314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.518 [2024-07-25 14:01:06.338330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.518 [2024-07-25 14:01:06.342688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.518 [2024-07-25 14:01:06.342735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.518 [2024-07-25 14:01:06.342749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.518 [2024-07-25 14:01:06.347122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.518 [2024-07-25 14:01:06.347167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.518 [2024-07-25 14:01:06.347181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.518 [2024-07-25 14:01:06.351565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.518 [2024-07-25 14:01:06.351606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.518 [2024-07-25 14:01:06.351620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.518 [2024-07-25 14:01:06.356000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.518 [2024-07-25 14:01:06.356046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.518 [2024-07-25 14:01:06.356060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.518 [2024-07-25 14:01:06.360544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.518 [2024-07-25 14:01:06.360592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.518 [2024-07-25 14:01:06.360606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.518 [2024-07-25 14:01:06.364985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.518 [2024-07-25 14:01:06.365031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.365045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.369475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.519 [2024-07-25 14:01:06.369518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.369531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.374010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.519 [2024-07-25 14:01:06.374053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.374067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.378705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.519 [2024-07-25 14:01:06.378749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.378762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.383104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.519 [2024-07-25 14:01:06.383151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.383165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.388149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.519 [2024-07-25 14:01:06.388199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.388214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.392934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.519 [2024-07-25 14:01:06.392985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.392999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.397413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.519 [2024-07-25 14:01:06.397464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.397480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.401850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.519 [2024-07-25 14:01:06.401901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.401915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.406327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.519 [2024-07-25 14:01:06.406373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.406388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.410808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.519 [2024-07-25 14:01:06.410854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.410868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.415323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.519 [2024-07-25 14:01:06.415369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.415383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.420021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.519 [2024-07-25 14:01:06.420070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.420085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.424634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.519 [2024-07-25 14:01:06.424699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.424722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.429197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 
00:18:17.519 [2024-07-25 14:01:06.429252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.429267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.433721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.519 [2024-07-25 14:01:06.433770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.433785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.438260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.519 [2024-07-25 14:01:06.438322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.438337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.442924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.519 [2024-07-25 14:01:06.442973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.442987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.447454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.519 [2024-07-25 14:01:06.447497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.447512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.452115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.519 [2024-07-25 14:01:06.452161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.452176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.456600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.519 [2024-07-25 14:01:06.456648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.519 [2024-07-25 14:01:06.456663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.519 [2024-07-25 14:01:06.461296] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.520 [2024-07-25 14:01:06.461357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.520 [2024-07-25 14:01:06.461372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.520 [2024-07-25 14:01:06.465811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.520 [2024-07-25 14:01:06.465859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.520 [2024-07-25 14:01:06.465874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.520 [2024-07-25 14:01:06.470511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.520 [2024-07-25 14:01:06.470556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.520 [2024-07-25 14:01:06.470570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.520 [2024-07-25 14:01:06.474977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.520 [2024-07-25 14:01:06.475022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.520 [2024-07-25 14:01:06.475036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.520 [2024-07-25 14:01:06.479858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.520 [2024-07-25 14:01:06.479907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.520 [2024-07-25 14:01:06.479922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.520 [2024-07-25 14:01:06.484396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.520 [2024-07-25 14:01:06.484449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.520 [2024-07-25 14:01:06.484464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.520 [2024-07-25 14:01:06.489590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.520 [2024-07-25 14:01:06.489648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.520 [2024-07-25 14:01:06.489663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:18:17.520 [2024-07-25 14:01:06.494093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.520 [2024-07-25 14:01:06.494143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.520 [2024-07-25 14:01:06.494157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.520 [2024-07-25 14:01:06.498601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.520 [2024-07-25 14:01:06.498646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.520 [2024-07-25 14:01:06.498661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.520 [2024-07-25 14:01:06.503010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.520 [2024-07-25 14:01:06.503061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.520 [2024-07-25 14:01:06.503075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.520 [2024-07-25 14:01:06.507509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.520 [2024-07-25 14:01:06.507550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.520 [2024-07-25 14:01:06.507565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.520 [2024-07-25 14:01:06.511890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.520 [2024-07-25 14:01:06.511934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.520 [2024-07-25 14:01:06.511948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.520 [2024-07-25 14:01:06.516352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.520 [2024-07-25 14:01:06.516397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.520 [2024-07-25 14:01:06.516412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.520 [2024-07-25 14:01:06.520892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.520 [2024-07-25 14:01:06.520936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.520 [2024-07-25 14:01:06.520950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.520 [2024-07-25 14:01:06.525325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.520 [2024-07-25 14:01:06.525366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.520 [2024-07-25 14:01:06.525380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.520 [2024-07-25 14:01:06.529657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.520 [2024-07-25 14:01:06.529699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.520 [2024-07-25 14:01:06.529714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.520 [2024-07-25 14:01:06.534063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.520 [2024-07-25 14:01:06.534107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.520 [2024-07-25 14:01:06.534121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.520 [2024-07-25 14:01:06.538424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.520 [2024-07-25 14:01:06.538468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.520 [2024-07-25 14:01:06.538482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.520 [2024-07-25 14:01:06.542718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.520 [2024-07-25 14:01:06.542762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.520 [2024-07-25 14:01:06.542776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.520 [2024-07-25 14:01:06.547096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.520 [2024-07-25 14:01:06.547140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.520 [2024-07-25 14:01:06.547155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.551495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.551535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.551548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.555885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.555930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.555944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.560332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.560377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.560391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.564705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.564751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.564765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.569099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.569142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.569156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.573560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.573605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.573619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.578033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.578080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.578095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.582533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.582579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:17.781 [2024-07-25 14:01:06.582593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.586880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.586926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.586940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.591679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.591721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.591735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.596162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.596211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.596232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.600566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.600610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.600624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.605051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.605106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.605120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.609500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.609544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.609558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.613837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.613884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.613898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.618963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.619033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.619057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.623554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.623600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.623614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.628582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.628629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.628644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.633153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.633196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.633212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.637501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.637543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.637557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.641941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.641982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.641997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.646680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.646722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.646736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.651471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.651512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.651526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.655882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.655924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.781 [2024-07-25 14:01:06.655938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.781 [2024-07-25 14:01:06.660254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.781 [2024-07-25 14:01:06.660296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.660324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.664655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.664697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.664711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.669081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.669123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.669137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.673632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.673676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.673690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.678002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 
00:18:17.782 [2024-07-25 14:01:06.678045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.678060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.682423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.682466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.682480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.686845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.686887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.686900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.691247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.691288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.691316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.695655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.695697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.695712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.700097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.700159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.700173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.704523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.704567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.704582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.708905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.708949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.708963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.713412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.713456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.713470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.717898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.717944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.717958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.722385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.722427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.722441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.726829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.726878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.726892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.731913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.731961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.731976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.736437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.736483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.736497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.741112] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.741162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.741177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.745624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.745671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.745685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.750108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.750155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.750169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.754610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.754655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.754670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.759041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.759086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.759100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.763493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.763534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.763548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.767822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.767867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.767881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:18:17.782 [2024-07-25 14:01:06.772277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.772337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.772351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.776630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.782 [2024-07-25 14:01:06.776674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.782 [2024-07-25 14:01:06.776688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.782 [2024-07-25 14:01:06.781000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.783 [2024-07-25 14:01:06.781043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.783 [2024-07-25 14:01:06.781057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.783 [2024-07-25 14:01:06.785444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.783 [2024-07-25 14:01:06.785487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.783 [2024-07-25 14:01:06.785501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.783 [2024-07-25 14:01:06.789883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.783 [2024-07-25 14:01:06.789928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.783 [2024-07-25 14:01:06.789942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.783 [2024-07-25 14:01:06.794325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.783 [2024-07-25 14:01:06.794366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.783 [2024-07-25 14:01:06.794380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.783 [2024-07-25 14:01:06.798766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.783 [2024-07-25 14:01:06.798810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.783 [2024-07-25 14:01:06.798824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.783 [2024-07-25 14:01:06.803233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.783 [2024-07-25 14:01:06.803279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.783 [2024-07-25 14:01:06.803293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.783 [2024-07-25 14:01:06.807683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:17.783 [2024-07-25 14:01:06.807723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.783 [2024-07-25 14:01:06.807737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.043 [2024-07-25 14:01:06.812195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.043 [2024-07-25 14:01:06.812250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.043 [2024-07-25 14:01:06.812268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.043 [2024-07-25 14:01:06.816631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.043 [2024-07-25 14:01:06.816674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.043 [2024-07-25 14:01:06.816687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.043 [2024-07-25 14:01:06.821047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.043 [2024-07-25 14:01:06.821092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.043 [2024-07-25 14:01:06.821105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.043 [2024-07-25 14:01:06.825526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.043 [2024-07-25 14:01:06.825571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.043 [2024-07-25 14:01:06.825584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.043 [2024-07-25 14:01:06.829877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.043 [2024-07-25 14:01:06.829923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.043 [2024-07-25 14:01:06.829937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.043 [2024-07-25 14:01:06.834170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.043 [2024-07-25 14:01:06.834215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.043 [2024-07-25 14:01:06.834228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.043 [2024-07-25 14:01:06.838620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.043 [2024-07-25 14:01:06.838665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.043 [2024-07-25 14:01:06.838679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.043 [2024-07-25 14:01:06.843082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.043 [2024-07-25 14:01:06.843126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.043 [2024-07-25 14:01:06.843139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.043 [2024-07-25 14:01:06.847539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.043 [2024-07-25 14:01:06.847581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.043 [2024-07-25 14:01:06.847595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.043 [2024-07-25 14:01:06.851920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.043 [2024-07-25 14:01:06.851963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.043 [2024-07-25 14:01:06.851977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.043 [2024-07-25 14:01:06.856336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.043 [2024-07-25 14:01:06.856380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.043 [2024-07-25 14:01:06.856393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.043 [2024-07-25 14:01:06.860736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.043 [2024-07-25 14:01:06.860781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:18.043 [2024-07-25 14:01:06.860794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.043 [2024-07-25 14:01:06.865054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.043 [2024-07-25 14:01:06.865098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.043 [2024-07-25 14:01:06.865112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.043 [2024-07-25 14:01:06.869523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.043 [2024-07-25 14:01:06.869567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.043 [2024-07-25 14:01:06.869580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.043 [2024-07-25 14:01:06.874016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.043 [2024-07-25 14:01:06.874061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.043 [2024-07-25 14:01:06.874075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.043 [2024-07-25 14:01:06.878401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.043 [2024-07-25 14:01:06.878445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.043 [2024-07-25 14:01:06.878459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.043 [2024-07-25 14:01:06.882819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.043 [2024-07-25 14:01:06.882862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.043 [2024-07-25 14:01:06.882876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.043 [2024-07-25 14:01:06.887267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.043 [2024-07-25 14:01:06.887325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.043 [2024-07-25 14:01:06.887340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.043 [2024-07-25 14:01:06.891677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.043 [2024-07-25 14:01:06.891722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.044 [2024-07-25 14:01:06.891736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.044 [2024-07-25 14:01:06.896170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.044 [2024-07-25 14:01:06.896231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.044 [2024-07-25 14:01:06.896253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.044 [2024-07-25 14:01:06.901191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.044 [2024-07-25 14:01:06.901238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.044 [2024-07-25 14:01:06.901252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.044 [2024-07-25 14:01:06.905557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.044 [2024-07-25 14:01:06.905602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.044 [2024-07-25 14:01:06.905617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.044 [2024-07-25 14:01:06.910001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.044 [2024-07-25 14:01:06.910049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.044 [2024-07-25 14:01:06.910062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.044 [2024-07-25 14:01:06.914505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.044 [2024-07-25 14:01:06.914557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.044 [2024-07-25 14:01:06.914570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.044 [2024-07-25 14:01:06.919173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.044 [2024-07-25 14:01:06.919219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.044 [2024-07-25 14:01:06.919233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.044 [2024-07-25 14:01:06.923652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.044 [2024-07-25 14:01:06.923693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.044 [2024-07-25 14:01:06.923707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.044 [2024-07-25 14:01:06.928086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.044 [2024-07-25 14:01:06.928141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.044 [2024-07-25 14:01:06.928156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.044 [2024-07-25 14:01:06.932542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.044 [2024-07-25 14:01:06.932586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.044 [2024-07-25 14:01:06.932600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.044 [2024-07-25 14:01:06.936996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.044 [2024-07-25 14:01:06.937041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.044 [2024-07-25 14:01:06.937055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.044 [2024-07-25 14:01:06.941534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.044 [2024-07-25 14:01:06.941703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.044 [2024-07-25 14:01:06.941831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.044 [2024-07-25 14:01:06.946443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.044 [2024-07-25 14:01:06.946492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.044 [2024-07-25 14:01:06.946508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.044 [2024-07-25 14:01:06.950802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.044 [2024-07-25 14:01:06.950847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.044 [2024-07-25 14:01:06.950861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.044 [2024-07-25 14:01:06.955266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 
00:18:18.044 [2024-07-25 14:01:06.955323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.044 [2024-07-25 14:01:06.955339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.044 [2024-07-25 14:01:06.959751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.044 [2024-07-25 14:01:06.959795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.044 [2024-07-25 14:01:06.959810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.044 [2024-07-25 14:01:06.964272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.044 [2024-07-25 14:01:06.964345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.044 [2024-07-25 14:01:06.964360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.044 [2024-07-25 14:01:06.968805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.044 [2024-07-25 14:01:06.968850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.044 [2024-07-25 14:01:06.968864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.044 [2024-07-25 14:01:06.973287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.044 [2024-07-25 14:01:06.973344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.044 [2024-07-25 14:01:06.973359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.045 [2024-07-25 14:01:06.977664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.045 [2024-07-25 14:01:06.977709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.045 [2024-07-25 14:01:06.977723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.045 [2024-07-25 14:01:06.982123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.045 [2024-07-25 14:01:06.982170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.045 [2024-07-25 14:01:06.982184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.045 [2024-07-25 14:01:06.986526] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.045 [2024-07-25 14:01:06.986571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.045 [2024-07-25 14:01:06.986585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.045 [2024-07-25 14:01:06.991057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.045 [2024-07-25 14:01:06.991105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.045 [2024-07-25 14:01:06.991118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.045 [2024-07-25 14:01:06.995481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.045 [2024-07-25 14:01:06.995528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.045 [2024-07-25 14:01:06.995543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.045 [2024-07-25 14:01:07.000118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.045 [2024-07-25 14:01:07.000166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.045 [2024-07-25 14:01:07.000179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.045 [2024-07-25 14:01:07.004669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.045 [2024-07-25 14:01:07.004715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.045 [2024-07-25 14:01:07.004730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.045 [2024-07-25 14:01:07.009080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.045 [2024-07-25 14:01:07.009126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.045 [2024-07-25 14:01:07.009140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.045 [2024-07-25 14:01:07.013372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.045 [2024-07-25 14:01:07.013419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.045 [2024-07-25 14:01:07.013433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:18:18.045 [2024-07-25 14:01:07.017819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.045 [2024-07-25 14:01:07.017872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.045 [2024-07-25 14:01:07.017887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.045 [2024-07-25 14:01:07.022491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.045 [2024-07-25 14:01:07.022538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.045 [2024-07-25 14:01:07.022552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.045 [2024-07-25 14:01:07.026982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.045 [2024-07-25 14:01:07.027033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.045 [2024-07-25 14:01:07.027058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.045 [2024-07-25 14:01:07.031588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.045 [2024-07-25 14:01:07.031637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.045 [2024-07-25 14:01:07.031651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.045 [2024-07-25 14:01:07.037067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.045 [2024-07-25 14:01:07.037126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.045 [2024-07-25 14:01:07.037142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.045 [2024-07-25 14:01:07.041713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.045 [2024-07-25 14:01:07.041762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.045 [2024-07-25 14:01:07.041776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.045 [2024-07-25 14:01:07.046640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.045 [2024-07-25 14:01:07.046703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.045 [2024-07-25 14:01:07.046719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.045 [2024-07-25 14:01:07.051147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.045 [2024-07-25 14:01:07.051196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.045 [2024-07-25 14:01:07.051219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.045 [2024-07-25 14:01:07.055998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.045 [2024-07-25 14:01:07.056064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.045 [2024-07-25 14:01:07.056084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.045 [2024-07-25 14:01:07.060642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.046 [2024-07-25 14:01:07.060690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.046 [2024-07-25 14:01:07.060704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.046 [2024-07-25 14:01:07.065157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.046 [2024-07-25 14:01:07.065208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.046 [2024-07-25 14:01:07.065230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.046 [2024-07-25 14:01:07.069791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.046 [2024-07-25 14:01:07.069839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.046 [2024-07-25 14:01:07.069853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.306 [2024-07-25 14:01:07.074317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.306 [2024-07-25 14:01:07.074364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.306 [2024-07-25 14:01:07.074378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.306 [2024-07-25 14:01:07.079141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.306 [2024-07-25 14:01:07.079199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.306 [2024-07-25 14:01:07.079222] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.306 [2024-07-25 14:01:07.083666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.306 [2024-07-25 14:01:07.083709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.306 [2024-07-25 14:01:07.083724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.306 [2024-07-25 14:01:07.088151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.306 [2024-07-25 14:01:07.088199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.306 [2024-07-25 14:01:07.088220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.306 [2024-07-25 14:01:07.092817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.306 [2024-07-25 14:01:07.092867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.306 [2024-07-25 14:01:07.092882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.306 [2024-07-25 14:01:07.097208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.306 [2024-07-25 14:01:07.097265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.306 [2024-07-25 14:01:07.097282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.306 [2024-07-25 14:01:07.101963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.306 [2024-07-25 14:01:07.102016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.306 [2024-07-25 14:01:07.102039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.306 [2024-07-25 14:01:07.106551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.306 [2024-07-25 14:01:07.106598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.306 [2024-07-25 14:01:07.106614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.306 [2024-07-25 14:01:07.111183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.306 [2024-07-25 14:01:07.111243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:18.306 [2024-07-25 14:01:07.111260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.306 [2024-07-25 14:01:07.115834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.306 [2024-07-25 14:01:07.115880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.306 [2024-07-25 14:01:07.115894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.306 [2024-07-25 14:01:07.120718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.306 [2024-07-25 14:01:07.120768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.306 [2024-07-25 14:01:07.120782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.306 [2024-07-25 14:01:07.125258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.306 [2024-07-25 14:01:07.125315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.306 [2024-07-25 14:01:07.125331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.306 [2024-07-25 14:01:07.129739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.306 [2024-07-25 14:01:07.129784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.306 [2024-07-25 14:01:07.129798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.306 [2024-07-25 14:01:07.134179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.306 [2024-07-25 14:01:07.134230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.306 [2024-07-25 14:01:07.134247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.306 [2024-07-25 14:01:07.138664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.306 [2024-07-25 14:01:07.138707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.306 [2024-07-25 14:01:07.138721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.306 [2024-07-25 14:01:07.143135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.306 [2024-07-25 14:01:07.143177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.306 [2024-07-25 14:01:07.143191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.147566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.147609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.147624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.152193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.152242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.152257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.156857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.156902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.156916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.161296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.161354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.161368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.165720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.165764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.165778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.170276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.170330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.170344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.174751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.174793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.174807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.179101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.179143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.179157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.183586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.183628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.183642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.188047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.188093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.188119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.192556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.192595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.192609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.196963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.197009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.197023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.201430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.201476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.201491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.205760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 
00:18:18.307 [2024-07-25 14:01:07.205811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.205825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.210262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.210317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.210332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.214624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.214668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.214681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.218974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.219017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.219030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.223449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.223489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.223503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.227770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.227811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.227824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.232232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.232274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.232288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.236593] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.236636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.236650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.240952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.240996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.241010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.245319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.245358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.245372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.249750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.307 [2024-07-25 14:01:07.249794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.307 [2024-07-25 14:01:07.249808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.307 [2024-07-25 14:01:07.254072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.308 [2024-07-25 14:01:07.254114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.308 [2024-07-25 14:01:07.254129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.308 [2024-07-25 14:01:07.258562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.308 [2024-07-25 14:01:07.258606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.308 [2024-07-25 14:01:07.258620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.308 [2024-07-25 14:01:07.262923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.308 [2024-07-25 14:01:07.262966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.308 [2024-07-25 14:01:07.262980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:18:18.308 [2024-07-25 14:01:07.267364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.308 [2024-07-25 14:01:07.267404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.308 [2024-07-25 14:01:07.267417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.308 [2024-07-25 14:01:07.271712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.308 [2024-07-25 14:01:07.271755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.308 [2024-07-25 14:01:07.271769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.308 [2024-07-25 14:01:07.276334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.308 [2024-07-25 14:01:07.276378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.308 [2024-07-25 14:01:07.276391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.308 [2024-07-25 14:01:07.281196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.308 [2024-07-25 14:01:07.281242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.308 [2024-07-25 14:01:07.281256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.308 [2024-07-25 14:01:07.285569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.308 [2024-07-25 14:01:07.285616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.308 [2024-07-25 14:01:07.285630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.308 [2024-07-25 14:01:07.290022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.308 [2024-07-25 14:01:07.290064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.308 [2024-07-25 14:01:07.290078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.308 [2024-07-25 14:01:07.294513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.308 [2024-07-25 14:01:07.294560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.308 [2024-07-25 14:01:07.294574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.308 [2024-07-25 14:01:07.298891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.308 [2024-07-25 14:01:07.298933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.308 [2024-07-25 14:01:07.298947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.308 [2024-07-25 14:01:07.303559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.308 [2024-07-25 14:01:07.303601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.308 [2024-07-25 14:01:07.303615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.308 [2024-07-25 14:01:07.308025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.308 [2024-07-25 14:01:07.308066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.308 [2024-07-25 14:01:07.308079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.308 [2024-07-25 14:01:07.312558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.308 [2024-07-25 14:01:07.312601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.308 [2024-07-25 14:01:07.312615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.308 [2024-07-25 14:01:07.316987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.308 [2024-07-25 14:01:07.317030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.308 [2024-07-25 14:01:07.317044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.308 [2024-07-25 14:01:07.321442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.308 [2024-07-25 14:01:07.321485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.308 [2024-07-25 14:01:07.321500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.308 [2024-07-25 14:01:07.325799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.308 [2024-07-25 14:01:07.325844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.308 [2024-07-25 14:01:07.325858] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.308 [2024-07-25 14:01:07.330242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.308 [2024-07-25 14:01:07.330287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.308 [2024-07-25 14:01:07.330312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.308 [2024-07-25 14:01:07.334748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.308 [2024-07-25 14:01:07.334790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.308 [2024-07-25 14:01:07.334804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.567 [2024-07-25 14:01:07.339130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.567 [2024-07-25 14:01:07.339173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.567 [2024-07-25 14:01:07.339187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.567 [2024-07-25 14:01:07.343538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.567 [2024-07-25 14:01:07.343583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.567 [2024-07-25 14:01:07.343598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.567 [2024-07-25 14:01:07.347983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.567 [2024-07-25 14:01:07.348028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.567 [2024-07-25 14:01:07.348042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.567 [2024-07-25 14:01:07.352531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.567 [2024-07-25 14:01:07.352574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.567 [2024-07-25 14:01:07.352589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.567 [2024-07-25 14:01:07.356967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.357011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:18.568 [2024-07-25 14:01:07.357025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.361405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.361449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.361463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.365767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.365811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.365825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.370226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.370270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.370284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.374655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.374696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.374710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.379130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.379174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.379188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.383575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.383616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.383630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.387952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.387994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.388007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.392389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.392429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.392443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.396691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.396733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.396747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.401079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.401123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.401138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.405534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.405579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.405593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.410566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.410611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.410625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.415011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.415055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.415069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.419591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.419653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.419676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.424017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.424062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.424076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.428530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.428573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.428588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.433169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.433221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.433243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.437965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.438012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.438026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.442625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.442672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.442687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.447063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.447108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.447122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.451782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 
00:18:18.568 [2024-07-25 14:01:07.451826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.451842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.456170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.456221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.456243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.460827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.460876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.460891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.465271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.465330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.465345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.469988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.470035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.568 [2024-07-25 14:01:07.470049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.568 [2024-07-25 14:01:07.474426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.568 [2024-07-25 14:01:07.474469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.474483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.479269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.479333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.479347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.483891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.483938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.483953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.488490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.488538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.488552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.492899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.492946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.492960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.497685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.497755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.497779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.502363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.502418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.502433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.506730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.506777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.506791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.511151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.511195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.511209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.515556] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.515597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.515612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.519892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.519935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.519948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.524373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.524416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.524430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.528936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.528981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.528994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.533598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.533644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.533658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.537992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.538036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.538050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.542447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.542490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.542504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:18:18.569 [2024-07-25 14:01:07.546861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.546904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.546919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.551263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.551314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.551329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.555655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.555698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.555712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.560040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.560081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.560095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.564405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.564447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.564460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.568680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.568722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.568735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.573095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.573139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.573152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.577571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.577617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.577631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.581920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.581964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.581977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.586289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.586347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.586362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.569 [2024-07-25 14:01:07.590813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.569 [2024-07-25 14:01:07.590858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.569 [2024-07-25 14:01:07.590872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.570 [2024-07-25 14:01:07.595289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.570 [2024-07-25 14:01:07.595343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.570 [2024-07-25 14:01:07.595357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.830 [2024-07-25 14:01:07.599695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.830 [2024-07-25 14:01:07.599736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.830 [2024-07-25 14:01:07.599750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.830 [2024-07-25 14:01:07.604226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200) 00:18:18.830 [2024-07-25 14:01:07.604270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.830 [2024-07-25 14:01:07.604284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:18:18.830 [2024-07-25 14:01:07.609026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbed200)
00:18:18.830 [2024-07-25 14:01:07.609072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:18.830 [2024-07-25 14:01:07.609086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same nvme_tcp.c:1459 data digest error, READ print_command and COMMAND TRANSIENT TRANSPORT ERROR completion notices repeat for the remaining reads on tqpair 0xbed200, timestamps 14:01:07.613452 through 14:01:07.949952 ...]
00:18:19.093
00:18:19.093 Latency(us)
00:18:19.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:19.093 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:18:19.093 nvme0n1 : 2.00 6855.73 856.97 0.00 0.00 2330.30 2010.76 5808.87
00:18:19.093 ===================================================================================================================
00:18:19.093 Total : 6855.73 856.97 0.00 0.00 2330.30 2010.76 5808.87
00:18:19.093 0
00:18:19.093 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:18:19.093 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:18:19.093 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:18:19.093 14:01:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:18:19.093 | .driver_specific
00:18:19.093 | .nvme_error
00:18:19.093 | .status_code
00:18:19.093 | .command_transient_transport_error'
00:18:19.352 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 442 > 0 ))
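The check above is what the randread pass is really about: with NVMe error statistics enabled on the bdevperf instance (bdev_nvme_set_options --nvme-error-stat, as in the setup traced further below), bdev_get_iostat reports per-status-code error counters, and host/digest.sh extracts command_transient_transport_error and asserts it is non-zero. A minimal standalone sketch of that query, reusing the socket, bdev name and rpc.py path from the trace above; the variable name and the echo are illustrative only:

    # hedged sketch: pull the transient transport error count out of bdev_get_iostat
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # this run counted 442 such errors; any value > 0 passes the check
    (( errcount > 0 )) && echo "injected digest errors surfaced as $errcount transient transport errors"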
00:18:19.352 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79882
00:18:19.352 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79882 ']'
00:18:19.352 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79882
00:18:19.352 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:18:19.352 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:19.352 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79882
00:18:19.352 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:18:19.352 killing process with pid 79882
00:18:19.352 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:18:19.352 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79882'
00:18:19.352 Received shutdown signal, test time was about 2.000000 seconds
00:18:19.352
00:18:19.352 Latency(us)
00:18:19.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:19.352 ===================================================================================================================
00:18:19.352 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:19.352 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79882
00:18:19.352 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79882
00:18:19.611 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:18:19.611 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:18:19.611 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:18:19.611 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:18:19.611 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:18:19.611 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79941
00:18:19.611 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79941 /var/tmp/bperf.sock
00:18:19.611 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:18:19.611 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79941 ']'
00:18:19.611 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:18:19.611 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:19.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:18:19.611 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:18:19.611 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:19.611 14:01:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
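For the randwrite pass the script goes through the same launch step as before: start bdevperf with -z so it idles until a perform_tests RPC arrives, remember its pid, and wait for the RPC socket before configuring anything. A rough sketch of just that step, using the binary path and arguments from the trace above; the polling loop stands in for autotest_common.sh's waitforlisten and is not its exact implementation:

    # hedged sketch: launch bdevperf in RPC-server mode and wait for its UNIX socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    # do not send any rpc.py commands until the socket exists
    while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done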
00:18:19.611 [2024-07-25 14:01:08.614349] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization...
00:18:19.611 [2024-07-25 14:01:08.614510] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79941 ]
00:18:19.869 [2024-07-25 14:01:08.762889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:20.128 [2024-07-25 14:01:08.885654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:18:20.128 [2024-07-25 14:01:08.939943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:18:20.695 14:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:18:20.695 14:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:18:20.695 14:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:20.695 14:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:20.999 14:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:18:20.999 14:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:20.999 14:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:20.999 14:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:20.999 14:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:20.999 14:01:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:21.291 nvme0n1
00:18:21.291 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:18:21.291 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:21.291 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:21.291 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:21.291 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:18:21.291 14:01:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:18:21.550 Running I/O for 2 seconds...
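Everything the digest-error test needs is configured over that socket before perform_tests is issued: NVMe error statistics are switched on, any previous crc32c error injection is cleared, the controller is attached over TCP with data digest (--ddgst) enabled, and only then is the accel layer told to corrupt crc32c operations, which is what makes the data digests stop matching and every I/O in the two-second run complete with COMMAND TRANSIENT TRANSPORT ERROR. A condensed sketch of that RPC sequence as it appears in the trace above; paths and addresses are copied from this run and the comments are editorial:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    $rpc -s $sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep per-status-code error counters, unlimited bdev retries
    $rpc -s $sock accel_error_inject_error -o crc32c -t disable                   # clear any stale injection
    $rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                            # attach with data digest enabled
    $rpc -s $sock accel_error_inject_error -o crc32c -t corrupt -i 256            # corrupt crc32c operations (-i 256 as in the trace)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests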
00:18:21.550 [2024-07-25 14:01:10.361331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190fef90
00:18:21.550 [2024-07-25 14:01:10.363921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:21.550 [2024-07-25 14:01:10.363969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same tcp.c:2113 Data digest error, WRITE print_command and COMMAND TRANSIENT TRANSPORT ERROR completion notices repeat for the following writes on tqpair 0xbbb650 (cid 3 through 95, timestamps 14:01:10.377946 through 14:01:11.149342) ...]
00:18:22.330 [2024-07-25 14:01:11.164676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e5ec8
00:18:22.330 [2024-07-25 14:01:11.166320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.330 [2024-07-25 14:01:11.166359] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.330 [2024-07-25 14:01:11.181704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e5658 00:18:22.330 [2024-07-25 14:01:11.183322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.330 [2024-07-25 14:01:11.183358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:22.330 [2024-07-25 14:01:11.198606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e4de8 00:18:22.330 [2024-07-25 14:01:11.200323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.330 [2024-07-25 14:01:11.200364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:22.330 [2024-07-25 14:01:11.215623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e4578 00:18:22.330 [2024-07-25 14:01:11.217204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.330 [2024-07-25 14:01:11.217245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:22.330 [2024-07-25 14:01:11.232634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e3d08 00:18:22.330 [2024-07-25 14:01:11.234187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.330 [2024-07-25 14:01:11.234229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:22.330 [2024-07-25 14:01:11.249721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e3498 00:18:22.330 [2024-07-25 14:01:11.251323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.330 [2024-07-25 14:01:11.251371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:22.330 [2024-07-25 14:01:11.266717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e2c28 00:18:22.330 [2024-07-25 14:01:11.268319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.330 [2024-07-25 14:01:11.268357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:22.330 [2024-07-25 14:01:11.283567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e23b8 00:18:22.330 [2024-07-25 14:01:11.285092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.330 [2024-07-25 
14:01:11.285135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:22.330 [2024-07-25 14:01:11.300517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e1b48 00:18:22.330 [2024-07-25 14:01:11.301999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.330 [2024-07-25 14:01:11.302040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:22.330 [2024-07-25 14:01:11.317241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e12d8 00:18:22.330 [2024-07-25 14:01:11.318759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.330 [2024-07-25 14:01:11.318800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:22.330 [2024-07-25 14:01:11.334129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e0a68 00:18:22.330 [2024-07-25 14:01:11.335563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.330 [2024-07-25 14:01:11.335605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:22.330 [2024-07-25 14:01:11.351131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e01f8 00:18:22.330 [2024-07-25 14:01:11.352572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.330 [2024-07-25 14:01:11.352627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:22.589 [2024-07-25 14:01:11.368041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190df988 00:18:22.589 [2024-07-25 14:01:11.369470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.589 [2024-07-25 14:01:11.369521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:22.589 [2024-07-25 14:01:11.384739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190df118 00:18:22.589 [2024-07-25 14:01:11.386102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.589 [2024-07-25 14:01:11.386143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:22.589 [2024-07-25 14:01:11.401260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190de8a8 00:18:22.589 [2024-07-25 14:01:11.402613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:22.589 [2024-07-25 14:01:11.402651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:22.589 [2024-07-25 14:01:11.417943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190de038 00:18:22.589 [2024-07-25 14:01:11.419260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.590 [2024-07-25 14:01:11.419319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:22.590 [2024-07-25 14:01:11.442093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190de038 00:18:22.590 [2024-07-25 14:01:11.444747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.590 [2024-07-25 14:01:11.444793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.590 [2024-07-25 14:01:11.459515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190de8a8 00:18:22.590 [2024-07-25 14:01:11.462125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.590 [2024-07-25 14:01:11.462169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:22.590 [2024-07-25 14:01:11.476528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190df118 00:18:22.590 [2024-07-25 14:01:11.479123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.590 [2024-07-25 14:01:11.479167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:22.590 [2024-07-25 14:01:11.493335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190df988 00:18:22.590 [2024-07-25 14:01:11.495859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.590 [2024-07-25 14:01:11.495899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:22.590 [2024-07-25 14:01:11.509945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e01f8 00:18:22.590 [2024-07-25 14:01:11.512482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.590 [2024-07-25 14:01:11.512524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:22.590 [2024-07-25 14:01:11.526715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e0a68 00:18:22.590 [2024-07-25 14:01:11.529211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13625 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:22.590 [2024-07-25 14:01:11.529253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:22.590 [2024-07-25 14:01:11.543459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e12d8 00:18:22.590 [2024-07-25 14:01:11.546119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.590 [2024-07-25 14:01:11.546169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:22.590 [2024-07-25 14:01:11.560368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e1b48 00:18:22.590 [2024-07-25 14:01:11.562811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.590 [2024-07-25 14:01:11.562862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:22.590 [2024-07-25 14:01:11.577222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e23b8 00:18:22.590 [2024-07-25 14:01:11.579717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.590 [2024-07-25 14:01:11.579766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:22.590 [2024-07-25 14:01:11.594341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e2c28 00:18:22.590 [2024-07-25 14:01:11.596778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.590 [2024-07-25 14:01:11.596828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:22.590 [2024-07-25 14:01:11.611014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e3498 00:18:22.590 [2024-07-25 14:01:11.613410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.590 [2024-07-25 14:01:11.613459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:22.849 [2024-07-25 14:01:11.627649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e3d08 00:18:22.849 [2024-07-25 14:01:11.630108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.849 [2024-07-25 14:01:11.630153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:22.849 [2024-07-25 14:01:11.644435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e4578 00:18:22.849 [2024-07-25 14:01:11.646834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12669 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.849 [2024-07-25 14:01:11.646881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:22.849 [2024-07-25 14:01:11.661473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e4de8 00:18:22.849 [2024-07-25 14:01:11.663821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.849 [2024-07-25 14:01:11.663873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:22.849 [2024-07-25 14:01:11.678276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e5658 00:18:22.849 [2024-07-25 14:01:11.680724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.849 [2024-07-25 14:01:11.680785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:22.849 [2024-07-25 14:01:11.695199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e5ec8 00:18:22.849 [2024-07-25 14:01:11.697610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.849 [2024-07-25 14:01:11.697663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:22.849 [2024-07-25 14:01:11.712050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e6738 00:18:22.849 [2024-07-25 14:01:11.714327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.849 [2024-07-25 14:01:11.714369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:22.849 [2024-07-25 14:01:11.728828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e6fa8 00:18:22.849 [2024-07-25 14:01:11.731069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.849 [2024-07-25 14:01:11.731129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:22.849 [2024-07-25 14:01:11.745896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e7818 00:18:22.849 [2024-07-25 14:01:11.748246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.849 [2024-07-25 14:01:11.748311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:22.849 [2024-07-25 14:01:11.762950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e8088 00:18:22.849 [2024-07-25 14:01:11.765249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 
nsid:1 lba:2388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.849 [2024-07-25 14:01:11.765314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:22.849 [2024-07-25 14:01:11.780063] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e88f8 00:18:22.849 [2024-07-25 14:01:11.782284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.849 [2024-07-25 14:01:11.782357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:22.849 [2024-07-25 14:01:11.797195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e9168 00:18:22.849 [2024-07-25 14:01:11.799401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.849 [2024-07-25 14:01:11.799460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:22.850 [2024-07-25 14:01:11.814180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190e99d8 00:18:22.850 [2024-07-25 14:01:11.816395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.850 [2024-07-25 14:01:11.816441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:22.850 [2024-07-25 14:01:11.831165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190ea248 00:18:22.850 [2024-07-25 14:01:11.833348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.850 [2024-07-25 14:01:11.833404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:22.850 [2024-07-25 14:01:11.848029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190eaab8 00:18:22.850 [2024-07-25 14:01:11.850136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.850 [2024-07-25 14:01:11.850175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:22.850 [2024-07-25 14:01:11.864641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190eb328 00:18:22.850 [2024-07-25 14:01:11.866857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.850 [2024-07-25 14:01:11.866897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:23.108 [2024-07-25 14:01:11.881837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190ebb98 00:18:23.108 [2024-07-25 14:01:11.884142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:20 nsid:1 lba:4638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.108 [2024-07-25 14:01:11.884185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:23.108 [2024-07-25 14:01:11.898891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190ec408 00:18:23.108 [2024-07-25 14:01:11.900951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.108 [2024-07-25 14:01:11.900992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:23.108 [2024-07-25 14:01:11.915725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190ecc78 00:18:23.108 [2024-07-25 14:01:11.917775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.108 [2024-07-25 14:01:11.917817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:23.108 [2024-07-25 14:01:11.932865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190ed4e8 00:18:23.108 [2024-07-25 14:01:11.934945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.108 [2024-07-25 14:01:11.934990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:23.108 [2024-07-25 14:01:11.949775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190edd58 00:18:23.108 [2024-07-25 14:01:11.951771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.108 [2024-07-25 14:01:11.951812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:23.108 [2024-07-25 14:01:11.966632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190ee5c8 00:18:23.108 [2024-07-25 14:01:11.968683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.108 [2024-07-25 14:01:11.968724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:23.108 [2024-07-25 14:01:11.983767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190eee38 00:18:23.108 [2024-07-25 14:01:11.985728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.108 [2024-07-25 14:01:11.985769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:23.108 [2024-07-25 14:01:12.000608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190ef6a8 00:18:23.108 [2024-07-25 14:01:12.002566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:9 nsid:1 lba:14814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.108 [2024-07-25 14:01:12.002608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:23.109 [2024-07-25 14:01:12.017352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190eff18 00:18:23.109 [2024-07-25 14:01:12.019297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.109 [2024-07-25 14:01:12.019346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:23.109 [2024-07-25 14:01:12.034170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190f0788 00:18:23.109 [2024-07-25 14:01:12.036143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.109 [2024-07-25 14:01:12.036185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:23.109 [2024-07-25 14:01:12.051170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190f0ff8 00:18:23.109 [2024-07-25 14:01:12.053063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.109 [2024-07-25 14:01:12.053120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:23.109 [2024-07-25 14:01:12.068142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190f1868 00:18:23.109 [2024-07-25 14:01:12.070114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.109 [2024-07-25 14:01:12.070154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:23.109 [2024-07-25 14:01:12.084955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190f20d8 00:18:23.109 [2024-07-25 14:01:12.086787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.109 [2024-07-25 14:01:12.086826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:23.109 [2024-07-25 14:01:12.101432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190f2948 00:18:23.109 [2024-07-25 14:01:12.103321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.109 [2024-07-25 14:01:12.103360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:23.109 [2024-07-25 14:01:12.118064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190f31b8 00:18:23.109 [2024-07-25 14:01:12.119843] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.109 [2024-07-25 14:01:12.119890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:23.109 [2024-07-25 14:01:12.134694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190f3a28 00:18:23.109 [2024-07-25 14:01:12.136465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.109 [2024-07-25 14:01:12.136504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:23.368 [2024-07-25 14:01:12.151428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190f4298 00:18:23.368 [2024-07-25 14:01:12.153152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.368 [2024-07-25 14:01:12.153192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:23.368 [2024-07-25 14:01:12.167988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190f4b08 00:18:23.368 [2024-07-25 14:01:12.169732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.368 [2024-07-25 14:01:12.169790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:23.368 [2024-07-25 14:01:12.184637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190f5378 00:18:23.368 [2024-07-25 14:01:12.186432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.368 [2024-07-25 14:01:12.186478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:23.368 [2024-07-25 14:01:12.201662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190f5be8 00:18:23.368 [2024-07-25 14:01:12.203401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.368 [2024-07-25 14:01:12.203442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:23.368 [2024-07-25 14:01:12.218605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190f6458 00:18:23.368 [2024-07-25 14:01:12.220284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.368 [2024-07-25 14:01:12.220332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:23.368 [2024-07-25 14:01:12.235641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190f6cc8 00:18:23.368 [2024-07-25 14:01:12.237316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.368 [2024-07-25 14:01:12.237351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:23.368 [2024-07-25 14:01:12.252525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190f7538 00:18:23.368 [2024-07-25 14:01:12.254173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.368 [2024-07-25 14:01:12.254214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:23.368 [2024-07-25 14:01:12.269201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190f7da8 00:18:23.368 [2024-07-25 14:01:12.270802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.368 [2024-07-25 14:01:12.270842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:23.368 [2024-07-25 14:01:12.285771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190f8618 00:18:23.368 [2024-07-25 14:01:12.287499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.368 [2024-07-25 14:01:12.287541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:23.368 [2024-07-25 14:01:12.302707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190f8e88 00:18:23.368 [2024-07-25 14:01:12.304413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.368 [2024-07-25 14:01:12.304454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:23.368 [2024-07-25 14:01:12.319698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190f96f8 00:18:23.368 [2024-07-25 14:01:12.321242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.368 [2024-07-25 14:01:12.321286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:23.368 [2024-07-25 14:01:12.336894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbb650) with pdu=0x2000190f9f68 00:18:23.368 [2024-07-25 14:01:12.338412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.368 [2024-07-25 14:01:12.338454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:23.368 00:18:23.368 Latency(us) 00:18:23.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.368 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, 
depth: 128, IO size: 4096) 00:18:23.368 nvme0n1 : 2.00 15040.48 58.75 0.00 0.00 8501.91 4259.84 31695.59 00:18:23.368 =================================================================================================================== 00:18:23.368 Total : 15040.48 58.75 0.00 0.00 8501.91 4259.84 31695.59 00:18:23.368 0 00:18:23.368 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:23.368 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:23.368 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:23.368 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:23.368 | .driver_specific 00:18:23.368 | .nvme_error 00:18:23.368 | .status_code 00:18:23.368 | .command_transient_transport_error' 00:18:23.935 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 118 > 0 )) 00:18:23.935 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79941 00:18:23.935 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79941 ']' 00:18:23.935 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79941 00:18:23.935 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:23.935 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:23.935 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79941 00:18:23.935 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:23.935 killing process with pid 79941 00:18:23.935 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:23.935 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79941' 00:18:23.935 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79941 00:18:23.935 Received shutdown signal, test time was about 2.000000 seconds 00:18:23.935 00:18:23.935 Latency(us) 00:18:23.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.935 =================================================================================================================== 00:18:23.935 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:23.935 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79941 00:18:24.194 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:24.194 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:24.194 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:24.194 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:24.194 14:01:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:24.194 14:01:13 
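
The get_transient_errcount step traced above boils down to one RPC call against bdevperf's socket plus a jq filter: bdev_get_iostat exposes the per-status-code NVMe error counters that --nvme-error-stat enabled, and the stage only passes if the command_transient_transport_error counter is non-zero (the (( 118 > 0 )) expansion in the trace shows 118 such completions for this pass). A condensed sketch using the same socket path, bdev name and filter as the trace; the errcount variable and the standalone form are illustrative rather than the literal digest.sh code:

  # Ask the bdevperf app for I/O statistics on nvme0n1 and pull out the count
  # of completions that ended as COMMAND TRANSIENT TRANSPORT ERROR.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # Fail this stage unless at least one transient transport error was recorded.
  (( errcount > 0 ))

For scale, the summary table above is internally consistent: 15040.48 IOPS of 4096-byte writes works out to the reported 58.75 MiB/s.
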
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80003 00:18:24.194 14:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80003 /var/tmp/bperf.sock 00:18:24.194 14:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:24.194 14:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80003 ']' 00:18:24.194 14:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:24.194 14:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:24.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:24.194 14:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:24.194 14:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:24.194 14:01:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:24.194 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:24.194 Zero copy mechanism will not be used. 00:18:24.195 [2024-07-25 14:01:13.055396] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:18:24.195 [2024-07-25 14:01:13.055496] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80003 ] 00:18:24.195 [2024-07-25 14:01:13.195617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.456 [2024-07-25 14:01:13.330161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.456 [2024-07-25 14:01:13.389348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:25.433 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:25.433 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:25.433 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:25.433 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:25.433 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:25.433 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.433 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:25.433 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.433 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc 
bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:25.433 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:25.690 nvme0n1 00:18:25.949 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:25.949 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.949 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:25.949 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.949 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:25.949 14:01:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:25.949 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:25.949 Zero copy mechanism will not be used. 00:18:25.949 Running I/O for 2 seconds... 00:18:25.949 [2024-07-25 14:01:14.857956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.949 [2024-07-25 14:01:14.858315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.949 [2024-07-25 14:01:14.858346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.949 [2024-07-25 14:01:14.863573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.949 [2024-07-25 14:01:14.863881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.949 [2024-07-25 14:01:14.863913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.949 [2024-07-25 14:01:14.869017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.949 [2024-07-25 14:01:14.869319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.949 [2024-07-25 14:01:14.869364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.949 [2024-07-25 14:01:14.874270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.949 [2024-07-25 14:01:14.874583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.949 [2024-07-25 14:01:14.874614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.949 [2024-07-25 14:01:14.879502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.949 [2024-07-25 14:01:14.879798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.949 [2024-07-25 14:01:14.879828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.949 [2024-07-25 14:01:14.884632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.949 [2024-07-25 14:01:14.884933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.949 [2024-07-25 14:01:14.884964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.949 [2024-07-25 14:01:14.889713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.949 [2024-07-25 14:01:14.890013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.949 [2024-07-25 14:01:14.890046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.949 [2024-07-25 14:01:14.894900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.949 [2024-07-25 14:01:14.895245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.949 [2024-07-25 14:01:14.895274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.949 [2024-07-25 14:01:14.900245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.949 [2024-07-25 14:01:14.900562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.949 [2024-07-25 14:01:14.900591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.949 [2024-07-25 14:01:14.905473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.949 [2024-07-25 14:01:14.905776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.949 [2024-07-25 14:01:14.905805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.949 [2024-07-25 14:01:14.910613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.949 [2024-07-25 14:01:14.910913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.949 [2024-07-25 14:01:14.910942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.949 [2024-07-25 14:01:14.915677] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.950 [2024-07-25 14:01:14.915970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.950 [2024-07-25 14:01:14.915999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.950 [2024-07-25 14:01:14.920842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.950 [2024-07-25 14:01:14.921143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.950 [2024-07-25 14:01:14.921172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.950 [2024-07-25 14:01:14.925917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.950 [2024-07-25 14:01:14.926216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.950 [2024-07-25 14:01:14.926245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.950 [2024-07-25 14:01:14.930993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.950 [2024-07-25 14:01:14.931294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.950 [2024-07-25 14:01:14.931336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.950 [2024-07-25 14:01:14.936080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.950 [2024-07-25 14:01:14.936402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.950 [2024-07-25 14:01:14.936438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.950 [2024-07-25 14:01:14.941139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.950 [2024-07-25 14:01:14.941460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.950 [2024-07-25 14:01:14.941486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.950 [2024-07-25 14:01:14.946264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.950 [2024-07-25 14:01:14.946589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.950 [2024-07-25 14:01:14.946620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:18:25.950 [2024-07-25 14:01:14.951383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.950 [2024-07-25 14:01:14.951692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.950 [2024-07-25 14:01:14.951721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.950 [2024-07-25 14:01:14.956530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.950 [2024-07-25 14:01:14.956828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.950 [2024-07-25 14:01:14.956857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.950 [2024-07-25 14:01:14.961592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.950 [2024-07-25 14:01:14.961905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.950 [2024-07-25 14:01:14.961933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.950 [2024-07-25 14:01:14.966709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.950 [2024-07-25 14:01:14.967019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.950 [2024-07-25 14:01:14.967048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.950 [2024-07-25 14:01:14.971830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.950 [2024-07-25 14:01:14.972150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.950 [2024-07-25 14:01:14.972179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.950 [2024-07-25 14:01:14.976920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:25.950 [2024-07-25 14:01:14.977218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.950 [2024-07-25 14:01:14.977257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.210 [2024-07-25 14:01:14.982016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.210 [2024-07-25 14:01:14.982325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.210 [2024-07-25 14:01:14.982354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.210 [2024-07-25 14:01:14.986970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.210 [2024-07-25 14:01:14.987263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.210 [2024-07-25 14:01:14.987291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.210 [2024-07-25 14:01:14.992069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.210 [2024-07-25 14:01:14.992390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.210 [2024-07-25 14:01:14.992429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.210 [2024-07-25 14:01:14.997141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.210 [2024-07-25 14:01:14.997464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.210 [2024-07-25 14:01:14.997493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.210 [2024-07-25 14:01:15.002205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.210 [2024-07-25 14:01:15.002518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.210 [2024-07-25 14:01:15.002550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.210 [2024-07-25 14:01:15.007286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.210 [2024-07-25 14:01:15.007594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.210 [2024-07-25 14:01:15.007624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.210 [2024-07-25 14:01:15.012388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.210 [2024-07-25 14:01:15.012696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.210 [2024-07-25 14:01:15.012725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.210 [2024-07-25 14:01:15.017517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.210 [2024-07-25 14:01:15.017810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.210 [2024-07-25 14:01:15.017838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.210 [2024-07-25 14:01:15.022612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.210 [2024-07-25 14:01:15.022905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.210 [2024-07-25 14:01:15.022935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.210 [2024-07-25 14:01:15.027655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.210 [2024-07-25 14:01:15.027952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.210 [2024-07-25 14:01:15.027981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.210 [2024-07-25 14:01:15.032797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.210 [2024-07-25 14:01:15.033103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.210 [2024-07-25 14:01:15.033133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.210 [2024-07-25 14:01:15.037935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.210 [2024-07-25 14:01:15.038236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.210 [2024-07-25 14:01:15.038265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.210 [2024-07-25 14:01:15.043029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.210 [2024-07-25 14:01:15.043338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.210 [2024-07-25 14:01:15.043367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.210 [2024-07-25 14:01:15.048141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.210 [2024-07-25 14:01:15.048456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.210 [2024-07-25 14:01:15.048486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.210 [2024-07-25 14:01:15.053288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.210 [2024-07-25 14:01:15.053607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.210 [2024-07-25 14:01:15.053636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.210 [2024-07-25 14:01:15.058460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.210 [2024-07-25 14:01:15.058762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.210 [2024-07-25 14:01:15.058792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.210 [2024-07-25 14:01:15.063595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.063893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.063923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.068817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.069115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.069145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.073927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.074225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.074256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.079112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.079434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.079460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.084271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.084597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.084626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.089344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.089655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 
[2024-07-25 14:01:15.089684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.094470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.094766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.094795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.099724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.100026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.100056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.104902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.105200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.105231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.110012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.110328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.110352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.115113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.115447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.115480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.120251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.120583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.120616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.125478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.125789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.125819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.130599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.130909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.130939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.135777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.136075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.136115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.140909] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.141208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.141238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.146089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.146412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.146442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.151166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.151480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.151510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.156327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.156651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.156681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.161493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.161790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.161819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.166635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.166934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.166964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.171732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.172039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.172068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.176851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.177145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.177174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.181896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.182193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.211 [2024-07-25 14:01:15.182222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.211 [2024-07-25 14:01:15.187017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.211 [2024-07-25 14:01:15.187336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.212 [2024-07-25 14:01:15.187365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.212 [2024-07-25 14:01:15.192054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.212 [2024-07-25 14:01:15.192383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.212 [2024-07-25 14:01:15.192412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.212 [2024-07-25 14:01:15.197104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.212 [2024-07-25 14:01:15.197415] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.212 [2024-07-25 14:01:15.197444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.212 [2024-07-25 14:01:15.202222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.212 [2024-07-25 14:01:15.202539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.212 [2024-07-25 14:01:15.202570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.212 [2024-07-25 14:01:15.207371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.212 [2024-07-25 14:01:15.207668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.212 [2024-07-25 14:01:15.207697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.212 [2024-07-25 14:01:15.212430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.212 [2024-07-25 14:01:15.212728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.212 [2024-07-25 14:01:15.212756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.212 [2024-07-25 14:01:15.217488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.212 [2024-07-25 14:01:15.217782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.212 [2024-07-25 14:01:15.217810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.212 [2024-07-25 14:01:15.222517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.212 [2024-07-25 14:01:15.222814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.212 [2024-07-25 14:01:15.222842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.212 [2024-07-25 14:01:15.227558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.212 [2024-07-25 14:01:15.227852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.212 [2024-07-25 14:01:15.227881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.212 [2024-07-25 14:01:15.232610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.212 [2024-07-25 14:01:15.232907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.212 [2024-07-25 14:01:15.232936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.212 [2024-07-25 14:01:15.237723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.212 [2024-07-25 14:01:15.238017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.212 [2024-07-25 14:01:15.238046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.471 [2024-07-25 14:01:15.242773] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.471 [2024-07-25 14:01:15.243066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.471 [2024-07-25 14:01:15.243101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.471 [2024-07-25 14:01:15.247875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.471 [2024-07-25 14:01:15.248183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.471 [2024-07-25 14:01:15.248212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.471 [2024-07-25 14:01:15.252964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.471 [2024-07-25 14:01:15.253258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.471 [2024-07-25 14:01:15.253286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.471 [2024-07-25 14:01:15.257988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.471 [2024-07-25 14:01:15.258280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.471 [2024-07-25 14:01:15.258319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.471 [2024-07-25 14:01:15.262993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.471 [2024-07-25 14:01:15.263285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.471 [2024-07-25 14:01:15.263332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.471 [2024-07-25 14:01:15.268045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.471 
[2024-07-25 14:01:15.268364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.471 [2024-07-25 14:01:15.268392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.471 [2024-07-25 14:01:15.273077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.471 [2024-07-25 14:01:15.273390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.471 [2024-07-25 14:01:15.273425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.471 [2024-07-25 14:01:15.278180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.471 [2024-07-25 14:01:15.278495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.471 [2024-07-25 14:01:15.278523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.471 [2024-07-25 14:01:15.283233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.471 [2024-07-25 14:01:15.283547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.471 [2024-07-25 14:01:15.283577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.471 [2024-07-25 14:01:15.288313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.471 [2024-07-25 14:01:15.288606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.471 [2024-07-25 14:01:15.288635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.471 [2024-07-25 14:01:15.293378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.471 [2024-07-25 14:01:15.293680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.471 [2024-07-25 14:01:15.293709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.471 [2024-07-25 14:01:15.298483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.471 [2024-07-25 14:01:15.298780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.471 [2024-07-25 14:01:15.298816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.471 [2024-07-25 14:01:15.303541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) 
with pdu=0x2000190fef90 00:18:26.471 [2024-07-25 14:01:15.303863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.471 [2024-07-25 14:01:15.303892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.471 [2024-07-25 14:01:15.308698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.471 [2024-07-25 14:01:15.308992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.471 [2024-07-25 14:01:15.309021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.471 [2024-07-25 14:01:15.313751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.471 [2024-07-25 14:01:15.314045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.471 [2024-07-25 14:01:15.314073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.471 [2024-07-25 14:01:15.318817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.471 [2024-07-25 14:01:15.319113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.471 [2024-07-25 14:01:15.319136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.471 [2024-07-25 14:01:15.325423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.471 [2024-07-25 14:01:15.325719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.325749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.330525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.330823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.330850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.335599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.335900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.335927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.340620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.340919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.340947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.345623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.345926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.345953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.350689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.350987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.351015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.355704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.356000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.356027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.360782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.361080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.361108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.365846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.366147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.366175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.370899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.371195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.371222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.375951] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.376257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.376284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.381042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.381353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.381381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.386106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.386421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.386448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.391185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.391502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.391534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.396234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.396547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.396574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.401425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.401745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.401788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.406669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.407036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.407063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
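(Editor's note on the repeated tcp.c:2113 data_crc32_calc_done entries above: they come from the NVMe/TCP data digest (DDGST) check, a CRC32C computed over the PDU data section; the target recomputes the digest, compares it with the value carried in the PDU, and on mismatch completes the WRITE with the TRANSIENT TRANSPORT ERROR (00/22) status shown in each entry. The standalone C sketch below only illustrates that compare-and-fail step in principle; it is not SPDK's implementation, and the payload and received_digest values are made-up example data.)

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise (slow but dependency-free) CRC32C, the checksum used for NVMe/TCP data digests. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Example payload; "123456789" is the standard CRC32C check vector (0xE3069283). */
    const uint8_t payload[] = "123456789";
    size_t payload_len = sizeof(payload) - 1;

    uint32_t received_digest = 0xDEADBEEFu;  /* made-up DDGST value "carried" in the PDU */
    uint32_t computed = crc32c(payload, payload_len);

    if (computed != received_digest) {
        /* This mismatch is the condition the log reports as "Data digest error". */
        printf("Data digest error: computed 0x%08x, received 0x%08x\n", computed, received_digest);
    } else {
        printf("data digest OK: 0x%08x\n", computed);
    }
    return 0;
}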
00:18:26.472 [2024-07-25 14:01:15.412038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.412388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.412416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.417242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.417609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.417642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.422491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.422787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.422814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.427595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.427894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.427920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.432799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.433113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.433141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.437946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.438244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.438272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.443178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.443502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.443534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.448377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.448673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.448700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.453517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.472 [2024-07-25 14:01:15.453818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.472 [2024-07-25 14:01:15.453845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.472 [2024-07-25 14:01:15.458794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.473 [2024-07-25 14:01:15.459096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.473 [2024-07-25 14:01:15.459124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.473 [2024-07-25 14:01:15.463917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.473 [2024-07-25 14:01:15.464250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.473 [2024-07-25 14:01:15.464278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.473 [2024-07-25 14:01:15.469121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.473 [2024-07-25 14:01:15.469458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.473 [2024-07-25 14:01:15.469485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.473 [2024-07-25 14:01:15.474264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.473 [2024-07-25 14:01:15.474607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.473 [2024-07-25 14:01:15.474634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.473 [2024-07-25 14:01:15.479482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.473 [2024-07-25 14:01:15.479789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.473 [2024-07-25 14:01:15.479816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.473 [2024-07-25 14:01:15.484759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.473 [2024-07-25 14:01:15.485062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.473 [2024-07-25 14:01:15.485089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.473 [2024-07-25 14:01:15.489952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.473 [2024-07-25 14:01:15.490257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.473 [2024-07-25 14:01:15.490285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.473 [2024-07-25 14:01:15.495061] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.473 [2024-07-25 14:01:15.495369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.473 [2024-07-25 14:01:15.495396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.473 [2024-07-25 14:01:15.500259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.473 [2024-07-25 14:01:15.500584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.473 [2024-07-25 14:01:15.500612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.505459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.505755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.505782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.510588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.510906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.510934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.515674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.515974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.516004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.520816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.521114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.521143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.525892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.526196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.526226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.531075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.531389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.531416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.536183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.536500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.536524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.541415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.541717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.541746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.546528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.546830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.546854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.551745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.552058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 
[2024-07-25 14:01:15.552086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.556847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.557150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.557180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.562047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.562353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.562382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.566912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.566998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.567021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.572088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.572188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.572211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.577341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.577422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.577445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.582645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.582727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.582750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.587800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.587883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.587905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.593063] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.593137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.593160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.598261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.598362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.598385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.603418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.603506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.603529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.608601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.608686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.608708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.613741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.613836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.613859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.618878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.618954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.618977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.624173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.624249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.624273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.629464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.629537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.629560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.634666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.634739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.634762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.639790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.639876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.639898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.645032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.645122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.645148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.650279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.650394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.732 [2024-07-25 14:01:15.650418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.732 [2024-07-25 14:01:15.655545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.732 [2024-07-25 14:01:15.655614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.655636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.733 [2024-07-25 14:01:15.660617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.733 [2024-07-25 14:01:15.660715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.660737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.733 [2024-07-25 14:01:15.665890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.733 [2024-07-25 14:01:15.665958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.665981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.733 [2024-07-25 14:01:15.671161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.733 [2024-07-25 14:01:15.671259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.671282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.733 [2024-07-25 14:01:15.676353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.733 [2024-07-25 14:01:15.676434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.676457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.733 [2024-07-25 14:01:15.681488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.733 [2024-07-25 14:01:15.681572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.681595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.733 [2024-07-25 14:01:15.686477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.733 [2024-07-25 14:01:15.686562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.686585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.733 [2024-07-25 14:01:15.691697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.733 [2024-07-25 14:01:15.691798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.691820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.733 [2024-07-25 14:01:15.696821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.733 [2024-07-25 14:01:15.696908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.696930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.733 [2024-07-25 14:01:15.702050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.733 [2024-07-25 14:01:15.702169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.702191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.733 [2024-07-25 14:01:15.707233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.733 [2024-07-25 14:01:15.707351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.707375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.733 [2024-07-25 14:01:15.712296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.733 [2024-07-25 14:01:15.712398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.712426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.733 [2024-07-25 14:01:15.717592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.733 [2024-07-25 14:01:15.717686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.717709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.733 [2024-07-25 14:01:15.722725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.733 [2024-07-25 14:01:15.722794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.722818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.733 [2024-07-25 14:01:15.727791] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.733 [2024-07-25 14:01:15.727871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.727894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.733 [2024-07-25 14:01:15.732979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.733 [2024-07-25 14:01:15.733096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.733123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.733 [2024-07-25 14:01:15.738200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.733 [2024-07-25 14:01:15.738289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.738324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.733 [2024-07-25 14:01:15.743402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.733 [2024-07-25 14:01:15.743479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.743501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.733 [2024-07-25 14:01:15.748540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.733 [2024-07-25 14:01:15.748612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.748643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.733 [2024-07-25 14:01:15.753756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.733 [2024-07-25 14:01:15.753834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.753857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.733 [2024-07-25 14:01:15.758954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:26.733 [2024-07-25 14:01:15.759031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.733 [2024-07-25 14:01:15.759058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.002 [2024-07-25 14:01:15.764036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.002 [2024-07-25 14:01:15.764162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.002 [2024-07-25 14:01:15.764189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.002 [2024-07-25 14:01:15.769289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.002 [2024-07-25 
14:01:15.769379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.002 [2024-07-25 14:01:15.769402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.002 [2024-07-25 14:01:15.774419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.002 [2024-07-25 14:01:15.774490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.002 [2024-07-25 14:01:15.774513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.002 [2024-07-25 14:01:15.779535] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.002 [2024-07-25 14:01:15.779606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.002 [2024-07-25 14:01:15.779629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.002 [2024-07-25 14:01:15.784755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.002 [2024-07-25 14:01:15.784843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.002 [2024-07-25 14:01:15.784865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.002 [2024-07-25 14:01:15.789981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.002 [2024-07-25 14:01:15.790049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.002 [2024-07-25 14:01:15.790071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.002 [2024-07-25 14:01:15.795100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.002 [2024-07-25 14:01:15.795193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.002 [2024-07-25 14:01:15.795217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.002 [2024-07-25 14:01:15.800387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.002 [2024-07-25 14:01:15.800468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.002 [2024-07-25 14:01:15.800491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.002 [2024-07-25 14:01:15.805522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 
00:18:27.002 [2024-07-25 14:01:15.805606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.002 [2024-07-25 14:01:15.805629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.002 [2024-07-25 14:01:15.810704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.002 [2024-07-25 14:01:15.810776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.002 [2024-07-25 14:01:15.810799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.002 [2024-07-25 14:01:15.815668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.002 [2024-07-25 14:01:15.815742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.002 [2024-07-25 14:01:15.815764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.002 [2024-07-25 14:01:15.820652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.002 [2024-07-25 14:01:15.820757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.002 [2024-07-25 14:01:15.820779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.002 [2024-07-25 14:01:15.825831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.002 [2024-07-25 14:01:15.825907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.002 [2024-07-25 14:01:15.825930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.002 [2024-07-25 14:01:15.830966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.002 [2024-07-25 14:01:15.831040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.002 [2024-07-25 14:01:15.831063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.003 [2024-07-25 14:01:15.836093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.003 [2024-07-25 14:01:15.836179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.003 [2024-07-25 14:01:15.836202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.003 [2024-07-25 14:01:15.841213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with 
pdu=0x2000190fef90 00:18:27.003 [2024-07-25 14:01:15.841325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.003 [2024-07-25 14:01:15.841354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.003 [2024-07-25 14:01:15.846391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.003 [2024-07-25 14:01:15.846466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.003 [2024-07-25 14:01:15.846489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.003 [2024-07-25 14:01:15.851585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.003 [2024-07-25 14:01:15.851659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.003 [2024-07-25 14:01:15.851682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.003 [2024-07-25 14:01:15.856749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.003 [2024-07-25 14:01:15.856827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.003 [2024-07-25 14:01:15.856854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.003 [2024-07-25 14:01:15.861878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.003 [2024-07-25 14:01:15.861950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.003 [2024-07-25 14:01:15.861973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.003 [2024-07-25 14:01:15.867022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.003 [2024-07-25 14:01:15.867098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.003 [2024-07-25 14:01:15.867121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.003 [2024-07-25 14:01:15.872170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.003 [2024-07-25 14:01:15.872244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.003 [2024-07-25 14:01:15.872266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.003 [2024-07-25 14:01:15.877255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.003 [2024-07-25 14:01:15.877375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.003 [2024-07-25 14:01:15.877398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.003 [2024-07-25 14:01:15.882496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.003 [2024-07-25 14:01:15.882609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.003 [2024-07-25 14:01:15.882636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.003 [2024-07-25 14:01:15.887691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.003 [2024-07-25 14:01:15.887781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.003 [2024-07-25 14:01:15.887808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.003 [2024-07-25 14:01:15.892776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.003 [2024-07-25 14:01:15.892857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.003 [2024-07-25 14:01:15.892880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.003 [2024-07-25 14:01:15.897912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.003 [2024-07-25 14:01:15.898003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.003 [2024-07-25 14:01:15.898025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.003 [2024-07-25 14:01:15.902963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.003 [2024-07-25 14:01:15.903056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.003 [2024-07-25 14:01:15.903078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.003 [2024-07-25 14:01:15.908069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.003 [2024-07-25 14:01:15.908147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.003 [2024-07-25 14:01:15.908170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.003 [2024-07-25 14:01:15.913232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.003 [2024-07-25 14:01:15.913322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.003 [2024-07-25 14:01:15.913345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.003 [2024-07-25 14:01:15.918335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.003 [2024-07-25 14:01:15.918418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.003 [2024-07-25 14:01:15.918441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.003 [2024-07-25 14:01:15.923569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.003 [2024-07-25 14:01:15.923652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.003 [2024-07-25 14:01:15.923675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.003 [2024-07-25 14:01:15.928906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.004 [2024-07-25 14:01:15.928991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.004 [2024-07-25 14:01:15.929013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.004 [2024-07-25 14:01:15.933885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.004 [2024-07-25 14:01:15.933982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.004 [2024-07-25 14:01:15.934005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.004 [2024-07-25 14:01:15.938962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.004 [2024-07-25 14:01:15.939055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.004 [2024-07-25 14:01:15.939077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.004 [2024-07-25 14:01:15.943972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.004 [2024-07-25 14:01:15.944066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.004 [2024-07-25 14:01:15.944088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.004 [2024-07-25 14:01:15.949158] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.004 [2024-07-25 14:01:15.949289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.004 [2024-07-25 14:01:15.949317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.004 [2024-07-25 14:01:15.954255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.004 [2024-07-25 14:01:15.954362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.004 [2024-07-25 14:01:15.954385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.004 [2024-07-25 14:01:15.959448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.004 [2024-07-25 14:01:15.959546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.004 [2024-07-25 14:01:15.959569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.004 [2024-07-25 14:01:15.964547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.004 [2024-07-25 14:01:15.964620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.004 [2024-07-25 14:01:15.964649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.004 [2024-07-25 14:01:15.969616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.004 [2024-07-25 14:01:15.969703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.004 [2024-07-25 14:01:15.969725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.004 [2024-07-25 14:01:15.974819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.004 [2024-07-25 14:01:15.974938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.004 [2024-07-25 14:01:15.974965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.004 [2024-07-25 14:01:15.980011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.004 [2024-07-25 14:01:15.980117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.004 [2024-07-25 14:01:15.980140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:18:27.004 [2024-07-25 14:01:15.985336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.004 [2024-07-25 14:01:15.985597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.004 [2024-07-25 14:01:15.985622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.004 [2024-07-25 14:01:15.990577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.004 [2024-07-25 14:01:15.990684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.004 [2024-07-25 14:01:15.990706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.004 [2024-07-25 14:01:15.995660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.004 [2024-07-25 14:01:15.995736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.004 [2024-07-25 14:01:15.995758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.004 [2024-07-25 14:01:16.000806] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.004 [2024-07-25 14:01:16.000883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.004 [2024-07-25 14:01:16.000906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.004 [2024-07-25 14:01:16.006058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.004 [2024-07-25 14:01:16.006136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.004 [2024-07-25 14:01:16.006159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.004 [2024-07-25 14:01:16.011276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.004 [2024-07-25 14:01:16.011386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.004 [2024-07-25 14:01:16.011409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.004 [2024-07-25 14:01:16.016469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.004 [2024-07-25 14:01:16.016555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.004 [2024-07-25 14:01:16.016577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:18:27.004 [2024-07-25 14:01:16.021612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.004 [2024-07-25 14:01:16.021687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.004 [2024-07-25 14:01:16.021710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.004 [2024-07-25 14:01:16.026825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.004 [2024-07-25 14:01:16.026930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.004 [2024-07-25 14:01:16.026953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.264 [2024-07-25 14:01:16.032021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.264 [2024-07-25 14:01:16.032100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.032133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.037160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.037347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.037374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.042511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.042693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.042724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.047744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.047857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.047880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.053139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.053226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.053249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.058561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.058645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.058667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.064039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.064128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.064151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.069350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.069448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.069470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.074512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.074597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.074620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.079832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.079945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.079972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.085089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.085219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.085241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.090238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.090352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.090376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.095426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.095521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.095543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.100672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.100778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.100800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.105965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.106062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.106084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.111163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.111270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.111293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.116199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.116340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.116362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.121349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.121433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.121456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.126500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.126621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.126643] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.131731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.131839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.131861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.136936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.137083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.137117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.141935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.142027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.142059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.146972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.147048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.147070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.152163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.152237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.152260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.157335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.157437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.157459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.162488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.162594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.162616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.167689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.167775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.265 [2024-07-25 14:01:16.167797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.265 [2024-07-25 14:01:16.172805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.265 [2024-07-25 14:01:16.172890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.172913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.177859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.177957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.177980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.183129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.183202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.183224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.188386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.188489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.188511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.193517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.193599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.193622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.198796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.198870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 
14:01:16.198893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.204016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.204172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.204199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.209217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.209363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.209385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.214359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.214494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.214533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.219651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.219725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.219748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.224854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.224943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.224965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.230002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.230076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.230099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.235177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.235264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:27.266 [2024-07-25 14:01:16.235286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.239992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.240176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.240199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.245057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.245382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.245408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.250242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.250564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.250604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.255557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.255882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.255914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.260767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.261070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.261102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.265879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.266179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.266212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.271028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.271368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.271401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.276195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.276517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.276549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.281335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.281636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.281668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.286578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.286893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.286926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.266 [2024-07-25 14:01:16.291697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.266 [2024-07-25 14:01:16.291993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.266 [2024-07-25 14:01:16.292026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.526 [2024-07-25 14:01:16.296976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.526 [2024-07-25 14:01:16.297280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.526 [2024-07-25 14:01:16.297325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.526 [2024-07-25 14:01:16.302181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.526 [2024-07-25 14:01:16.302494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.526 [2024-07-25 14:01:16.302531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.526 [2024-07-25 14:01:16.307363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.526 [2024-07-25 14:01:16.307662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.526 [2024-07-25 14:01:16.307695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.526 [2024-07-25 14:01:16.312500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.526 [2024-07-25 14:01:16.312805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.526 [2024-07-25 14:01:16.312837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.526 [2024-07-25 14:01:16.317754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.526 [2024-07-25 14:01:16.318057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.526 [2024-07-25 14:01:16.318090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.526 [2024-07-25 14:01:16.322893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.526 [2024-07-25 14:01:16.323192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.526 [2024-07-25 14:01:16.323225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.526 [2024-07-25 14:01:16.328121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.526 [2024-07-25 14:01:16.328472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.526 [2024-07-25 14:01:16.328513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.526 [2024-07-25 14:01:16.333394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.526 [2024-07-25 14:01:16.333697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.526 [2024-07-25 14:01:16.333730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.526 [2024-07-25 14:01:16.338312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.526 [2024-07-25 14:01:16.338401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.526 [2024-07-25 14:01:16.338423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.526 [2024-07-25 14:01:16.343390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.526 [2024-07-25 14:01:16.343490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.526 [2024-07-25 14:01:16.343512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.526 [2024-07-25 14:01:16.348555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.526 [2024-07-25 14:01:16.348626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.526 [2024-07-25 14:01:16.348653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.526 [2024-07-25 14:01:16.353769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.526 [2024-07-25 14:01:16.353844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.526 [2024-07-25 14:01:16.353871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.526 [2024-07-25 14:01:16.358804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.526 [2024-07-25 14:01:16.358878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.526 [2024-07-25 14:01:16.358906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.526 [2024-07-25 14:01:16.363971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.526 [2024-07-25 14:01:16.364040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.526 [2024-07-25 14:01:16.364063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.526 [2024-07-25 14:01:16.369145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.526 [2024-07-25 14:01:16.369227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.526 [2024-07-25 14:01:16.369255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.526 [2024-07-25 14:01:16.374222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.526 [2024-07-25 14:01:16.374292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.374329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.379487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.379562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.379584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.384630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.384702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.384724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.389704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.389785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.389808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.394707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.394787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.394815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.399772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.399854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.399877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.404936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.405009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.405032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.409967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.410040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.410063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.415083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.415154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.415177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.420279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.420359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.420382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.425401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.425495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.425517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.430453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.430535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.430557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.435544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.435611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.435634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.440650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.440722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.440744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.445725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.445817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.445839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.450893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 
14:01:16.450987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.451024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.455997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.456090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.456123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.461201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.461274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.461297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.466404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.466488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.466521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.471531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.471605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.471632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.476634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.476725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.476749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.481817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.481898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.481921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.486930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 
00:18:27.527 [2024-07-25 14:01:16.487006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.487029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.492141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.492240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.492262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.497235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.497346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.497383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.502393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.527 [2024-07-25 14:01:16.502472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.527 [2024-07-25 14:01:16.502495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.527 [2024-07-25 14:01:16.507472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.528 [2024-07-25 14:01:16.507570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.528 [2024-07-25 14:01:16.507597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.528 [2024-07-25 14:01:16.512732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.528 [2024-07-25 14:01:16.512820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.528 [2024-07-25 14:01:16.512843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.528 [2024-07-25 14:01:16.518105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.528 [2024-07-25 14:01:16.518189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.528 [2024-07-25 14:01:16.518212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.528 [2024-07-25 14:01:16.523279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with 
pdu=0x2000190fef90 00:18:27.528 [2024-07-25 14:01:16.523371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.528 [2024-07-25 14:01:16.523394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.528 [2024-07-25 14:01:16.528472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.528 [2024-07-25 14:01:16.528561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.528 [2024-07-25 14:01:16.528583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.528 [2024-07-25 14:01:16.533717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.528 [2024-07-25 14:01:16.533789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.528 [2024-07-25 14:01:16.533812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.528 [2024-07-25 14:01:16.538885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.528 [2024-07-25 14:01:16.538962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.528 [2024-07-25 14:01:16.538985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.528 [2024-07-25 14:01:16.544001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.528 [2024-07-25 14:01:16.544081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.528 [2024-07-25 14:01:16.544104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.528 [2024-07-25 14:01:16.549047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.528 [2024-07-25 14:01:16.549153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.528 [2024-07-25 14:01:16.549175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.528 [2024-07-25 14:01:16.554225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.528 [2024-07-25 14:01:16.554358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.528 [2024-07-25 14:01:16.554382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.787 [2024-07-25 14:01:16.559344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.787 [2024-07-25 14:01:16.559424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.787 [2024-07-25 14:01:16.559447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.787 [2024-07-25 14:01:16.564579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.787 [2024-07-25 14:01:16.564650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.787 [2024-07-25 14:01:16.564673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.787 [2024-07-25 14:01:16.569946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.788 [2024-07-25 14:01:16.570052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.788 [2024-07-25 14:01:16.570074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.788 [2024-07-25 14:01:16.575250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.788 [2024-07-25 14:01:16.575357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.788 [2024-07-25 14:01:16.575380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.788 [2024-07-25 14:01:16.580251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.788 [2024-07-25 14:01:16.580352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.788 [2024-07-25 14:01:16.580382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.788 [2024-07-25 14:01:16.585412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.788 [2024-07-25 14:01:16.585482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.788 [2024-07-25 14:01:16.585505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.788 [2024-07-25 14:01:16.590591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.788 [2024-07-25 14:01:16.590674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.788 [2024-07-25 14:01:16.590697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.788 [2024-07-25 14:01:16.595742] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.788 [2024-07-25 14:01:16.595840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.788 [2024-07-25 14:01:16.595863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.788 [2024-07-25 14:01:16.600983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.788 [2024-07-25 14:01:16.601079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.788 [2024-07-25 14:01:16.601102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.788 [2024-07-25 14:01:16.606165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.788 [2024-07-25 14:01:16.606247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.788 [2024-07-25 14:01:16.606271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.788 [2024-07-25 14:01:16.611499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.788 [2024-07-25 14:01:16.611582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.788 [2024-07-25 14:01:16.611605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.788 [2024-07-25 14:01:16.616723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.788 [2024-07-25 14:01:16.616834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.788 [2024-07-25 14:01:16.616857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.788 [2024-07-25 14:01:16.622074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.788 [2024-07-25 14:01:16.622158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.788 [2024-07-25 14:01:16.622181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.788 [2024-07-25 14:01:16.627188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.788 [2024-07-25 14:01:16.627292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.788 [2024-07-25 14:01:16.627329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.788 [2024-07-25 14:01:16.632278] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.788 [2024-07-25 14:01:16.632366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.788 [2024-07-25 14:01:16.632395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.788 [2024-07-25 14:01:16.637500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.788 [2024-07-25 14:01:16.637580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.788 [2024-07-25 14:01:16.637603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.788 [2024-07-25 14:01:16.642540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.788 [2024-07-25 14:01:16.642606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.788 [2024-07-25 14:01:16.642635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.788 [2024-07-25 14:01:16.647722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.788 [2024-07-25 14:01:16.647801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.788 [2024-07-25 14:01:16.647824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.788 [2024-07-25 14:01:16.652905] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.788 [2024-07-25 14:01:16.652987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.788 [2024-07-25 14:01:16.653009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.788 [2024-07-25 14:01:16.658029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.788 [2024-07-25 14:01:16.658165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.788 [2024-07-25 14:01:16.658203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.788 [2024-07-25 14:01:16.663056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.788 [2024-07-25 14:01:16.663134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.788 [2024-07-25 14:01:16.663157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.788 
[2024-07-25 14:01:16.668280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.788 [2024-07-25 14:01:16.668382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.789 [2024-07-25 14:01:16.668405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.789 [2024-07-25 14:01:16.673455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.789 [2024-07-25 14:01:16.673552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.789 [2024-07-25 14:01:16.673575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.789 [2024-07-25 14:01:16.678559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.789 [2024-07-25 14:01:16.678646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.789 [2024-07-25 14:01:16.678668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.789 [2024-07-25 14:01:16.683740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.789 [2024-07-25 14:01:16.683818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.789 [2024-07-25 14:01:16.683841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.789 [2024-07-25 14:01:16.688973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.789 [2024-07-25 14:01:16.689056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.789 [2024-07-25 14:01:16.689078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.789 [2024-07-25 14:01:16.694199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.789 [2024-07-25 14:01:16.694274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.789 [2024-07-25 14:01:16.694296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.789 [2024-07-25 14:01:16.699315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.789 [2024-07-25 14:01:16.699396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.789 [2024-07-25 14:01:16.699419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:18:27.789 [2024-07-25 14:01:16.703919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.789 [2024-07-25 14:01:16.704240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.789 [2024-07-25 14:01:16.704273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.789 [2024-07-25 14:01:16.709132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.789 [2024-07-25 14:01:16.709446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.789 [2024-07-25 14:01:16.709480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.789 [2024-07-25 14:01:16.714331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.789 [2024-07-25 14:01:16.714637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.789 [2024-07-25 14:01:16.714670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.789 [2024-07-25 14:01:16.719615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.789 [2024-07-25 14:01:16.719916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.789 [2024-07-25 14:01:16.719950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.789 [2024-07-25 14:01:16.724813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.789 [2024-07-25 14:01:16.725113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.789 [2024-07-25 14:01:16.725146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.789 [2024-07-25 14:01:16.730013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.789 [2024-07-25 14:01:16.730326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.789 [2024-07-25 14:01:16.730358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.789 [2024-07-25 14:01:16.735378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.789 [2024-07-25 14:01:16.735677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.789 [2024-07-25 14:01:16.735710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.789 [2024-07-25 14:01:16.740534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.789 [2024-07-25 14:01:16.740831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.789 [2024-07-25 14:01:16.740864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.789 [2024-07-25 14:01:16.745781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.789 [2024-07-25 14:01:16.746076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.789 [2024-07-25 14:01:16.746109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.789 [2024-07-25 14:01:16.751007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.789 [2024-07-25 14:01:16.751320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.789 [2024-07-25 14:01:16.751353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.789 [2024-07-25 14:01:16.756254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.789 [2024-07-25 14:01:16.756577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.789 [2024-07-25 14:01:16.756610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.789 [2024-07-25 14:01:16.761482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.789 [2024-07-25 14:01:16.761784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.789 [2024-07-25 14:01:16.761819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.789 [2024-07-25 14:01:16.766585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.789 [2024-07-25 14:01:16.766908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.789 [2024-07-25 14:01:16.766942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.789 [2024-07-25 14:01:16.771838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.790 [2024-07-25 14:01:16.772146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.790 [2024-07-25 14:01:16.772178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.790 [2024-07-25 14:01:16.776962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.790 [2024-07-25 14:01:16.777274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.790 [2024-07-25 14:01:16.777320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.790 [2024-07-25 14:01:16.782196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.790 [2024-07-25 14:01:16.782513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.790 [2024-07-25 14:01:16.782545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.790 [2024-07-25 14:01:16.787578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.790 [2024-07-25 14:01:16.787890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.790 [2024-07-25 14:01:16.787922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.790 [2024-07-25 14:01:16.792694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.790 [2024-07-25 14:01:16.793021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.790 [2024-07-25 14:01:16.793054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.790 [2024-07-25 14:01:16.797938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.790 [2024-07-25 14:01:16.798239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.790 [2024-07-25 14:01:16.798272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.790 [2024-07-25 14:01:16.803032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.790 [2024-07-25 14:01:16.803348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.790 [2024-07-25 14:01:16.803381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.790 [2024-07-25 14:01:16.808273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.790 [2024-07-25 14:01:16.808587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.790 [2024-07-25 14:01:16.808619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.790 [2024-07-25 14:01:16.813462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:27.790 [2024-07-25 14:01:16.813762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.790 [2024-07-25 14:01:16.813794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.049 [2024-07-25 14:01:16.818801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:28.049 [2024-07-25 14:01:16.819086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.049 [2024-07-25 14:01:16.819119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.049 [2024-07-25 14:01:16.823798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:28.049 [2024-07-25 14:01:16.823865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.049 [2024-07-25 14:01:16.823888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.049 [2024-07-25 14:01:16.828944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:28.049 [2024-07-25 14:01:16.829017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.049 [2024-07-25 14:01:16.829046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.049 [2024-07-25 14:01:16.834144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:28.049 [2024-07-25 14:01:16.834219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.049 [2024-07-25 14:01:16.834241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.049 [2024-07-25 14:01:16.839420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:28.049 [2024-07-25 14:01:16.839496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.049 [2024-07-25 14:01:16.839519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.049 [2024-07-25 14:01:16.844525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:28.049 [2024-07-25 14:01:16.844616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.049 [2024-07-25 14:01:16.844639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.049 [2024-07-25 14:01:16.849663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd7f080) with pdu=0x2000190fef90 00:18:28.049 [2024-07-25 14:01:16.849729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.049 [2024-07-25 14:01:16.849752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.049 00:18:28.049 Latency(us) 00:18:28.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.049 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:28.049 nvme0n1 : 2.00 6004.25 750.53 0.00 0.00 2658.64 1549.03 6583.39 00:18:28.049 =================================================================================================================== 00:18:28.049 Total : 6004.25 750.53 0.00 0.00 2658.64 1549.03 6583.39 00:18:28.049 0 00:18:28.049 14:01:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:28.049 14:01:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:28.049 14:01:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:28.049 14:01:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:28.049 | .driver_specific 00:18:28.049 | .nvme_error 00:18:28.049 | .status_code 00:18:28.049 | .command_transient_transport_error' 00:18:28.307 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 387 > 0 )) 00:18:28.307 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80003 00:18:28.307 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80003 ']' 00:18:28.307 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80003 00:18:28.307 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:28.307 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:28.307 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80003 00:18:28.307 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:28.307 killing process with pid 80003 00:18:28.307 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:28.307 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80003' 00:18:28.307 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80003 00:18:28.307 Received shutdown signal, test time was about 2.000000 seconds 00:18:28.307 00:18:28.307 Latency(us) 00:18:28.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.307 
=================================================================================================================== 00:18:28.307 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:28.307 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80003 00:18:28.567 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79789 00:18:28.567 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79789 ']' 00:18:28.567 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79789 00:18:28.567 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:28.567 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:28.567 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79789 00:18:28.567 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:28.567 killing process with pid 79789 00:18:28.567 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:28.567 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79789' 00:18:28.567 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79789 00:18:28.567 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79789 00:18:28.839 00:18:28.839 real 0m19.048s 00:18:28.839 user 0m37.292s 00:18:28.839 sys 0m4.795s 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:28.839 ************************************ 00:18:28.839 END TEST nvmf_digest_error 00:18:28.839 ************************************ 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:28.839 rmmod nvme_tcp 00:18:28.839 rmmod nvme_fabrics 00:18:28.839 rmmod nvme_keyring 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 79789 ']' 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 79789 
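The transient-error check traced above (host/digest.sh@71) passes because every deliberately corrupted data digest comes back as a COMMAND TRANSIENT TRANSPORT ERROR completion, and bdevperf accumulates those per bdev. A minimal sketch of that check, with the RPC socket, bdev name and jq filter taken from the trace above (the helper mirrors get_transient_errcount in host/digest.sh; the standalone form shown here is illustrative, not the script's verbatim code):

    # Ask the bdevperf instance (listening on the bperf RPC socket) for per-bdev
    # error statistics and pull out the transient transport error counter.
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # The test only asserts that at least one such error was recorded (387 here).
    (( $(get_transient_errcount nvme0n1) > 0 ))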
00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 79789 ']' 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 79789 00:18:28.839 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (79789) - No such process 00:18:28.839 Process with pid 79789 is not found 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 79789 is not found' 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:28.839 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.098 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:29.098 00:18:29.098 real 0m38.889s 00:18:29.098 user 1m13.532s 00:18:29.098 sys 0m10.963s 00:18:29.098 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:29.098 14:01:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:29.098 ************************************ 00:18:29.098 END TEST nvmf_digest 00:18:29.098 ************************************ 00:18:29.098 14:01:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:18:29.098 14:01:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:18:29.098 14:01:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:29.098 14:01:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:29.098 14:01:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:29.098 14:01:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.098 ************************************ 00:18:29.098 START TEST nvmf_host_multipath 00:18:29.098 ************************************ 00:18:29.098 14:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:29.098 * Looking for test storage... 
00:18:29.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:18:29.098 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 
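With rpc_py and bpf_sh resolved above, the multipath run that follows has three layers: nvmftestinit assembles a veth/namespace topology (nvmf_init_if at 10.0.0.1 on the host, nvmf_tgt_if at 10.0.0.2 inside nvmf_tgt_ns_spdk, joined over the nvmf_br bridge and verified by the pings below); the target is then given a single Malloc0-backed subsystem, nqn.2016-06.io.spdk:cnode1, with TCP listeners on ports 4420 and 4421; and bdevperf attaches Nvme0 through the 4420 listener and adds the 4421 path with -x multipath. Each scenario afterwards flips the ANA state of the two listeners and confirms which port actually carries I/O. A condensed sketch of that ANA flip, with the address, ports and NQN taken from the trace below (it mirrors set_ANA_state in host/multipath.sh, but the wrapper as written here is an illustration, not the verbatim helper):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # set_ANA_state <state for port 4420> <state for port 4421>,
    # e.g. "non_optimized optimized" or "inaccessible inaccessible".
    set_ANA_state() {
        $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }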
00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:29.099 14:01:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:29.099 Cannot find device "nvmf_tgt_br" 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:29.099 Cannot find device "nvmf_tgt_br2" 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:29.099 Cannot find device "nvmf_tgt_br" 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:29.099 Cannot find device "nvmf_tgt_br2" 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:18:29.099 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:29.357 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:29.357 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:29.357 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:29.357 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:29.357 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:29.358 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:29.358 14:01:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:29.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:29.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:18:29.358 00:18:29.358 --- 10.0.0.2 ping statistics --- 00:18:29.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.358 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:29.358 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:29.358 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:18:29.358 00:18:29.358 --- 10.0.0.3 ping statistics --- 00:18:29.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.358 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:29.358 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:29.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:29.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:18:29.616 00:18:29.616 --- 10.0.0.1 ping statistics --- 00:18:29.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.616 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=80276 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 80276 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 80276 ']' 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:29.616 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:29.616 [2024-07-25 14:01:18.479085] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:18:29.616 [2024-07-25 14:01:18.479208] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.616 [2024-07-25 14:01:18.618652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:29.874 [2024-07-25 14:01:18.752826] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.874 [2024-07-25 14:01:18.752894] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:29.874 [2024-07-25 14:01:18.752909] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.874 [2024-07-25 14:01:18.752919] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.874 [2024-07-25 14:01:18.752928] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:29.874 [2024-07-25 14:01:18.753081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.874 [2024-07-25 14:01:18.753096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.874 [2024-07-25 14:01:18.810968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:30.807 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:30.807 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:18:30.807 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:30.807 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:30.807 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:30.807 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.807 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80276 00:18:30.807 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:30.807 [2024-07-25 14:01:19.793933] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.807 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:31.373 Malloc0 00:18:31.373 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:31.631 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:31.889 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:32.148 [2024-07-25 14:01:21.069389] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.149 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:32.449 [2024-07-25 14:01:21.353503] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:32.449 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80332 00:18:32.450 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:32.450 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:32.450 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80332 /var/tmp/bdevperf.sock 00:18:32.450 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 80332 ']' 00:18:32.450 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:32.450 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:32.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:32.450 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:32.450 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:32.450 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:33.824 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:33.824 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:18:33.824 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:33.824 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:34.390 Nvme0n1 00:18:34.390 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:34.647 Nvme0n1 00:18:34.647 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:34.647 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:35.581 14:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:35.581 14:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:35.841 14:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:36.115 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:36.115 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80383 00:18:36.115 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:36.115 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80276 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:42.675 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:42.675 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:42.675 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:42.675 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:42.675 Attaching 4 probes... 00:18:42.675 @path[10.0.0.2, 4421]: 17434 00:18:42.675 @path[10.0.0.2, 4421]: 17852 00:18:42.675 @path[10.0.0.2, 4421]: 17913 00:18:42.675 @path[10.0.0.2, 4421]: 17924 00:18:42.675 @path[10.0.0.2, 4421]: 17861 00:18:42.675 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:42.675 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:42.675 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:42.675 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:42.675 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:42.675 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:42.675 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80383 00:18:42.675 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:42.675 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:42.675 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:42.933 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:42.933 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:42.933 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80495 00:18:42.933 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:42.933 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80276 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:49.493 14:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:49.493 14:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:49.493 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:49.493 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:49.493 Attaching 4 probes... 00:18:49.493 @path[10.0.0.2, 4420]: 17340 00:18:49.493 @path[10.0.0.2, 4420]: 17722 00:18:49.493 @path[10.0.0.2, 4420]: 17928 00:18:49.493 @path[10.0.0.2, 4420]: 18092 00:18:49.493 @path[10.0.0.2, 4420]: 17920 00:18:49.493 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:49.493 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:49.493 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:49.493 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:49.493 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:49.493 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:49.493 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80495 00:18:49.493 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:49.493 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:49.493 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:49.751 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:49.751 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:49.751 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80608 00:18:49.751 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80276 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:49.751 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:56.314 14:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:56.314 14:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:56.314 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:56.314 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:56.314 Attaching 4 probes... 00:18:56.314 @path[10.0.0.2, 4421]: 13755 00:18:56.314 @path[10.0.0.2, 4421]: 18950 00:18:56.314 @path[10.0.0.2, 4421]: 18461 00:18:56.314 @path[10.0.0.2, 4421]: 17671 00:18:56.314 @path[10.0.0.2, 4421]: 17941 00:18:56.314 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:56.314 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:56.314 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:56.314 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:56.314 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:56.314 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:56.314 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80608 00:18:56.314 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:56.314 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:56.314 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:56.572 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:56.830 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:56.830 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80726 00:18:56.830 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80276 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:56.830 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:03.387 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:03.387 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:19:03.387 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:19:03.387 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:03.387 Attaching 4 probes... 
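Every confirm_io_on_port iteration in this run follows the same pattern: start scripts/bpf/nvmf_path.bt against the target (pid 80276) through scripts/bpftrace.sh, let bdevperf run for six seconds, then compare the port that nvmf_subsystem_get_listeners reports in the expected ANA state against the port the probes actually saw I/O on. A sketch of one iteration, with the paths, pid and jq/awk filters copied from the trace above (the backgrounding and the redirection into trace.txt are assumptions, since xtrace does not echo redirections; "optimized" stands in for whichever state the scenario expects, and it is the empty string for the inaccessible/inaccessible case whose blank probe output follows):

    spdk=/home/vagrant/spdk_repo/spdk
    trace=$spdk/test/nvmf/host/trace.txt

    # Sample for ~6 s which path bdevperf I/O takes; nvmf_path.bt prints lines
    # of the form "@path[10.0.0.2, <port>]: <sample count>".
    $spdk/scripts/bpftrace.sh 80276 $spdk/scripts/bpf/nvmf_path.bt > "$trace" &
    dtrace_pid=$!
    sleep 6

    # Port the target claims is in the expected ANA state right now.
    expected=$($spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
        jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')

    # Port the probes observed I/O on (empty when no path is reachable).
    observed=$(cut -d ']' -f1 "$trace" | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p)

    [[ $observed == "$expected" ]]
    kill $dtrace_pid
    rm -f "$trace"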
00:19:03.387 00:19:03.387 00:19:03.387 00:19:03.387 00:19:03.387 00:19:03.387 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:03.387 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:03.387 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:03.387 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:19:03.387 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:19:03.387 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:19:03.387 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80726 00:19:03.387 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:03.387 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:19:03.387 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:03.387 14:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:03.645 14:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:19:03.645 14:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80838 00:19:03.645 14:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:03.645 14:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80276 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:10.274 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:10.274 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:10.274 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:10.274 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:10.274 Attaching 4 probes... 
00:19:10.274 @path[10.0.0.2, 4421]: 16782 00:19:10.274 @path[10.0.0.2, 4421]: 17104 00:19:10.274 @path[10.0.0.2, 4421]: 17945 00:19:10.274 @path[10.0.0.2, 4421]: 17929 00:19:10.274 @path[10.0.0.2, 4421]: 17664 00:19:10.274 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:10.274 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:10.274 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:10.274 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:10.274 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:10.274 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:10.274 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80838 00:19:10.274 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:10.274 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:10.274 14:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:19:11.209 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:11.209 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80962 00:19:11.209 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:11.209 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80276 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:17.824 14:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:17.824 14:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:17.824 14:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:17.824 14:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:17.824 Attaching 4 probes... 
00:19:17.824 @path[10.0.0.2, 4420]: 16732 00:19:17.824 @path[10.0.0.2, 4420]: 17065 00:19:17.824 @path[10.0.0.2, 4420]: 17128 00:19:17.824 @path[10.0.0.2, 4420]: 17367 00:19:17.824 @path[10.0.0.2, 4420]: 17357 00:19:17.824 14:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:17.824 14:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:17.824 14:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:17.824 14:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:17.824 14:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:17.824 14:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:17.825 14:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80962 00:19:17.825 14:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:17.825 14:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:17.825 [2024-07-25 14:02:06.645069] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:17.825 14:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:18.083 14:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:24.645 14:02:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:24.645 14:02:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81135 00:19:24.645 14:02:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80276 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:24.645 14:02:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:29.908 14:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:29.908 14:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:30.166 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:30.166 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:30.166 Attaching 4 probes... 
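For reference, the active-port lookup traced just above (host/multipath.sh@67) is a single RPC plus a jq filter: nvmf_subsystem_get_listeners returns an array of listener objects, and the filter prints the trsvcid of the listener whose first ANA state matches the expected one. A minimal sketch of that step, with the listener JSON trimmed to the fields the filter actually touches (the real payload carries more keys than shown, and expected_state/active_port are only illustrative names):

  # Listener entries, trimmed to what the filter reads (illustrative shape only):
  #   [ { "address": { "trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4421" },
  #       "ana_states": [ { "ana_state": "optimized" } ] }, ... ]
  expected_state=optimized    # "non_optimized" for the 4420 check traced above
  active_port=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
    jq -r ".[] | select(.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")
  # active_port is 4421 in the run above; the script then checks the bpftrace
  # counters in trace.txt (the "@path" lines that follow) to confirm that I/O
  # actually flowed to that port.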
00:19:30.166 @path[10.0.0.2, 4421]: 16479 00:19:30.166 @path[10.0.0.2, 4421]: 17159 00:19:30.166 @path[10.0.0.2, 4421]: 17185 00:19:30.166 @path[10.0.0.2, 4421]: 17158 00:19:30.166 @path[10.0.0.2, 4421]: 17071 00:19:30.166 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:30.166 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:30.166 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:30.166 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:30.166 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:30.166 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:30.166 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81135 00:19:30.166 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:30.166 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80332 00:19:30.166 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 80332 ']' 00:19:30.166 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 80332 00:19:30.166 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:19:30.166 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:30.167 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80332 00:19:30.434 killing process with pid 80332 00:19:30.434 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:30.434 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:30.434 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80332' 00:19:30.434 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 80332 00:19:30.434 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 80332 00:19:30.434 Connection closed with partial response: 00:19:30.434 00:19:30.434 00:19:30.434 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80332 00:19:30.434 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:30.434 [2024-07-25 14:01:21.434091] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
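Before the raw bdevperf log (try.txt) continues, here is how the port check at host/multipath.sh@69, traced three times above, recovers the active port from the trace.txt written by bpftrace, whose lines look like "@path[10.0.0.2, 4421]: 16782". A minimal sketch, assuming the three filters are piped together as the trace suggests; the port_from_trace wrapper is an illustrative name, not part of the test script:

  port_from_trace() {                      # illustrative helper, not in multipath.sh
    local trace_file=$1                    # e.g. test/nvmf/host/trace.txt
    # "@path[10.0.0.2, 4421]: 16782"  ->  "@path[10.0.0.2, 4421"  ->  "4421"
    cut -d ']' -f1 "$trace_file" |
      awk '$1=="@path[10.0.0.2," {print $2}' |
      sed -n 1p                            # first path observed wins
  }
  # Example from the run above:
  #   port_from_trace /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt  ->  4421
  # multipath.sh@70-71 then compares that value against the expected listener
  # port before killing the bpftrace child and removing trace.txt.

The bdevperf log dumped by host/multipath.sh@118 resumes below.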
00:19:30.434 [2024-07-25 14:01:21.434225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80332 ] 00:19:30.434 [2024-07-25 14:01:21.576742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.434 [2024-07-25 14:01:21.712544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.434 [2024-07-25 14:01:21.772123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:30.434 Running I/O for 90 seconds... 00:19:30.434 [2024-07-25 14:01:31.928932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.929024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.929109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.929147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.929183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.929218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.929253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.929287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.929338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.434 [2024-07-25 14:01:31.929374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.434 [2024-07-25 14:01:31.929410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.434 [2024-07-25 14:01:31.929467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.434 [2024-07-25 14:01:31.929504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.434 [2024-07-25 14:01:31.929539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.434 [2024-07-25 14:01:31.929574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.434 [2024-07-25 14:01:31.929609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.434 [2024-07-25 14:01:31.929644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.929684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.929725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.929759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.929794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.929828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.929862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.929906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.929944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.929965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.929979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.930000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.930015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.930036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.930051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.930071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.930085] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.930106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.434 [2024-07-25 14:01:31.930121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:30.434 [2024-07-25 14:01:31.930142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.435 [2024-07-25 14:01:31.930157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.435 [2024-07-25 14:01:31.930192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.435 [2024-07-25 14:01:31.930227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.930269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.930319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.930358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.930402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:65280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.930438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65288 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.930473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:65296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.930509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.930544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.930579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.930614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.930649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.930684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.930720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.930755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.930799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930827] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.930842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.930878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.930914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.930951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.930972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.930987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.931008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.931022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.931043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.931057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.931078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.931092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.931114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.435 [2024-07-25 14:01:31.931128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.931152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.435 [2024-07-25 14:01:31.931167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 
14:01:31.931190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.435 [2024-07-25 14:01:31.931205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.931226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.435 [2024-07-25 14:01:31.931240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.931268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.435 [2024-07-25 14:01:31.931284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.931327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.435 [2024-07-25 14:01:31.931342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.931363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.435 [2024-07-25 14:01:31.931378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.931399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.435 [2024-07-25 14:01:31.931414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.931434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.435 [2024-07-25 14:01:31.931449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.931469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.435 [2024-07-25 14:01:31.931484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.931505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.435 [2024-07-25 14:01:31.931520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.931540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.435 [2024-07-25 14:01:31.931555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 
sqhd:0014 p:0 m:0 dnr:0 00:19:30.435 [2024-07-25 14:01:31.931576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.436 [2024-07-25 14:01:31.931591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.931612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.436 [2024-07-25 14:01:31.931626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.931647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.436 [2024-07-25 14:01:31.931661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.931689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.436 [2024-07-25 14:01:31.931704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.931724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.436 [2024-07-25 14:01:31.931746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.931768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.436 [2024-07-25 14:01:31.931783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.931812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.436 [2024-07-25 14:01:31.931827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.931848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.436 [2024-07-25 14:01:31.931862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.931882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.436 [2024-07-25 14:01:31.931897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.931918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.436 [2024-07-25 14:01:31.931932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.931953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.436 [2024-07-25 14:01:31.931967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.931988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.436 [2024-07-25 14:01:31.932002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.436 [2024-07-25 14:01:31.932037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.436 [2024-07-25 14:01:31.932073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.436 [2024-07-25 14:01:31.932108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.436 [2024-07-25 14:01:31.932157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.436 [2024-07-25 14:01:31.932199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.436 [2024-07-25 14:01:31.932237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.436 [2024-07-25 14:01:31.932272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.436 [2024-07-25 
14:01:31.932322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.436 [2024-07-25 14:01:31.932360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.436 [2024-07-25 14:01:31.932411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.436 [2024-07-25 14:01:31.932454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.436 [2024-07-25 14:01:31.932489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.436 [2024-07-25 14:01:31.932524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.436 [2024-07-25 14:01:31.932560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.436 [2024-07-25 14:01:31.932595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.436 [2024-07-25 14:01:31.932630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.436 [2024-07-25 14:01:31.932665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66080 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:30.436 [2024-07-25 14:01:31.932709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.436 [2024-07-25 14:01:31.932744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.436 [2024-07-25 14:01:31.932780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.436 [2024-07-25 14:01:31.932815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.436 [2024-07-25 14:01:31.932850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.436 [2024-07-25 14:01:31.932885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.436 [2024-07-25 14:01:31.932920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.436 [2024-07-25 14:01:31.932955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.932975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.436 [2024-07-25 14:01:31.932990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:30.436 [2024-07-25 14:01:31.933016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.436 [2024-07-25 14:01:31.933032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.933053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:118 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.437 [2024-07-25 14:01:31.933067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.933088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.437 [2024-07-25 14:01:31.933102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.933129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.437 [2024-07-25 14:01:31.933145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.933166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.437 [2024-07-25 14:01:31.933181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.933201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.437 [2024-07-25 14:01:31.933216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.933237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.437 [2024-07-25 14:01:31.933251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.933272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.437 [2024-07-25 14:01:31.933287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.934951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.437 [2024-07-25 14:01:31.934983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.935012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.437 [2024-07-25 14:01:31.935030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.935052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.437 [2024-07-25 14:01:31.935067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.935088] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.437 [2024-07-25 14:01:31.935103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.935125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.437 [2024-07-25 14:01:31.935140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.935160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.437 [2024-07-25 14:01:31.935175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.935196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:31.935211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.935232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:31.935259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.935287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:31.935327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.935351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:31.935366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.935387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:31.935402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.935422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:31.935436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.935458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:31.935472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 
m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.935509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:31.935528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:31.935550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:31.935565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:38.512394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:38.512479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:38.512540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:38.512579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:38.512617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:38.512632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:38.512652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:38.512666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:38.512685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:38.512727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:38.512750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:38.512765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:38.512784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:38.512798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:38.512817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:38.512830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:38.512849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:38.512863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:38.512883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:38.512896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:38.512915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:38.512929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:38.512948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:38.512961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:38.512981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:38.512994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:38.513014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:38.513028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:38.513047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:38.513060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:38.513080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.437 [2024-07-25 14:01:38.513094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:30.437 [2024-07-25 14:01:38.513113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.438 [2024-07-25 14:01:38.513127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.438 [2024-07-25 14:01:38.513175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.438 [2024-07-25 14:01:38.513209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.438 [2024-07-25 14:01:38.513260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.438 [2024-07-25 14:01:38.513301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.438 [2024-07-25 14:01:38.513336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.438 [2024-07-25 14:01:38.513389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.438 [2024-07-25 14:01:38.513425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.438 [2024-07-25 14:01:38.513460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.438 [2024-07-25 14:01:38.513497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.438 [2024-07-25 14:01:38.513533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:30.438 [2024-07-25 14:01:38.513568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.438 [2024-07-25 14:01:38.513603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.438 [2024-07-25 14:01:38.513677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.438 [2024-07-25 14:01:38.513711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.438 [2024-07-25 14:01:38.513744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.438 [2024-07-25 14:01:38.513785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.438 [2024-07-25 14:01:38.513818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.438 [2024-07-25 14:01:38.513851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.438 [2024-07-25 14:01:38.513885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.438 [2024-07-25 14:01:38.513917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10128 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.438 [2024-07-25 14:01:38.513950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.513969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.438 [2024-07-25 14:01:38.513983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.514002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.438 [2024-07-25 14:01:38.514016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.514040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.438 [2024-07-25 14:01:38.514055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.514076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.438 [2024-07-25 14:01:38.514097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.514118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.438 [2024-07-25 14:01:38.514133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.514152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.438 [2024-07-25 14:01:38.514166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.514185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.438 [2024-07-25 14:01:38.514199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.514218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.438 [2024-07-25 14:01:38.514248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.514284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.438 [2024-07-25 14:01:38.514299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:30.438 [2024-07-25 14:01:38.514319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.439 [2024-07-25 14:01:38.514334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.514355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.439 [2024-07-25 14:01:38.514384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.514410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.439 [2024-07-25 14:01:38.514425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.514445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.439 [2024-07-25 14:01:38.514460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.514480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-07-25 14:01:38.514495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.514515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-07-25 14:01:38.514529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.514550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-07-25 14:01:38.514571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.514594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-07-25 14:01:38.514608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.514629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-07-25 14:01:38.514645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.514681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-07-25 14:01:38.514710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:19:30.439 [2024-07-25 14:01:38.514730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-07-25 14:01:38.514745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.514764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-07-25 14:01:38.514778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.514797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-07-25 14:01:38.514811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.514831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-07-25 14:01:38.514844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.514864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-07-25 14:01:38.514877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.514896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-07-25 14:01:38.514910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.514930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-07-25 14:01:38.514943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.514963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-07-25 14:01:38.514976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.514995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-07-25 14:01:38.515009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.515036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-07-25 14:01:38.515050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.515069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.439 [2024-07-25 14:01:38.515083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.515102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.439 [2024-07-25 14:01:38.515116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.515136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.439 [2024-07-25 14:01:38.515149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.515169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.439 [2024-07-25 14:01:38.515182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.515202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.439 [2024-07-25 14:01:38.515216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.515252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.439 [2024-07-25 14:01:38.515283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.515305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.439 [2024-07-25 14:01:38.515319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.515340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.439 [2024-07-25 14:01:38.515355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.515387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.439 [2024-07-25 14:01:38.515402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.515423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.439 [2024-07-25 14:01:38.515438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.515459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.439 [2024-07-25 14:01:38.515473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.515501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.439 [2024-07-25 14:01:38.515521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.515542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.439 [2024-07-25 14:01:38.515556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.515577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.439 [2024-07-25 14:01:38.515591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.515627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.439 [2024-07-25 14:01:38.515641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.515662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.439 [2024-07-25 14:01:38.515676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.515710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-07-25 14:01:38.515724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.515743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-07-25 14:01:38.515756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:30.439 [2024-07-25 14:01:38.515793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.439 [2024-07-25 14:01:38.515807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.515827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:30.440 [2024-07-25 14:01:38.515841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.515861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.440 [2024-07-25 14:01:38.515875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.515895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.440 [2024-07-25 14:01:38.515909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.515930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.440 [2024-07-25 14:01:38.515944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.515964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.440 [2024-07-25 14:01:38.515984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.516006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.516020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.516040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.516060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.516080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.516094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.516140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.516157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.516178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.516193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.516243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:10408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.516263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.516285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.516300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.516336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.516353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.516375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.516389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.516410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.516424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.516445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.516459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.516480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.516505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.516528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.516543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.516564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.440 [2024-07-25 14:01:38.516579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.516600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.440 [2024-07-25 14:01:38.516615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.516636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.440 [2024-07-25 14:01:38.516660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.516680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.440 [2024-07-25 14:01:38.516695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.516715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.440 [2024-07-25 14:01:38.516729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.516750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.440 [2024-07-25 14:01:38.516764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.516785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.440 [2024-07-25 14:01:38.516800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.517591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.440 [2024-07-25 14:01:38.517632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.517666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.517683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.517711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.517726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.517754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.517769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.517809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.517826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:19:30.440 [2024-07-25 14:01:38.517854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.517869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.517897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.517911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.517939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.517954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.517998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.518017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.518047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.518062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.518091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.518106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.518134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.518148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.518177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.518191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:30.440 [2024-07-25 14:01:38.518222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.440 [2024-07-25 14:01:38.518254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:38.518283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:38.518298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:38.518327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:38.518356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:38.518395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:38.518413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.642426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:45.642506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.642563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:45.642585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.642607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:45.642623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.642644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:45.642658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.642679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:45.642693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.642714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:45.642728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.642749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:45.642763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.642783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:45.642797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.642818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:45.642832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.642853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:45.642867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.642888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:45.642902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.642923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:45.642964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.642987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:45.643001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:45.643038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:60336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.441 [2024-07-25 14:01:45.643072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.441 [2024-07-25 14:01:45.643107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.441 [2024-07-25 14:01:45.643147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:30.441 [2024-07-25 14:01:45.643183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.441 [2024-07-25 14:01:45.643218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.441 [2024-07-25 14:01:45.643253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.441 [2024-07-25 14:01:45.643288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.441 [2024-07-25 14:01:45.643339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.441 [2024-07-25 14:01:45.643374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.441 [2024-07-25 14:01:45.643417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.441 [2024-07-25 14:01:45.643455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.441 [2024-07-25 14:01:45.643490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.441 [2024-07-25 14:01:45.643524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 
nsid:1 lba:60440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.441 [2024-07-25 14:01:45.643561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.441 [2024-07-25 14:01:45.643596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.441 [2024-07-25 14:01:45.643631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:45.643666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:45.643702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:45.643913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:45.643952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.643973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.441 [2024-07-25 14:01:45.643988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:30.441 [2024-07-25 14:01:45.644009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.644023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.644072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.644108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.644156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.644192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.644227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.644262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.644297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.644357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.644392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.644428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.644462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 
00:19:30.442 [2024-07-25 14:01:45.644483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.442 [2024-07-25 14:01:45.644498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:60472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.442 [2024-07-25 14:01:45.644544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.442 [2024-07-25 14:01:45.644597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.442 [2024-07-25 14:01:45.644651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.442 [2024-07-25 14:01:45.644688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.442 [2024-07-25 14:01:45.644724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.442 [2024-07-25 14:01:45.644760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.442 [2024-07-25 14:01:45.644796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.442 [2024-07-25 14:01:45.644832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.442 [2024-07-25 14:01:45.644867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.442 [2024-07-25 14:01:45.644903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:60552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.442 [2024-07-25 14:01:45.644939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.442 [2024-07-25 14:01:45.644975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.644996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.442 [2024-07-25 14:01:45.645019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.645042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:60576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.442 [2024-07-25 14:01:45.645057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.645078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.442 [2024-07-25 14:01:45.645092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.645114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.645128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.645149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.645164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.645185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.645199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.645220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.645235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.645256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.645270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.645291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.645319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.645342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.645357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.645378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.645394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.645415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.442 [2024-07-25 14:01:45.645429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.645450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.442 [2024-07-25 14:01:45.645479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.645510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.442 [2024-07-25 14:01:45.645526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:30.442 [2024-07-25 14:01:45.645547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.645562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.645583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.645597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.645618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:30.443 [2024-07-25 14:01:45.645633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.645654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.645668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.645689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.645704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.645725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.645739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.645765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.443 [2024-07-25 14:01:45.645781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.645802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.443 [2024-07-25 14:01:45.645817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.645837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.443 [2024-07-25 14:01:45.645852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.645872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.443 [2024-07-25 14:01:45.645887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.645909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.443 [2024-07-25 14:01:45.645924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.645953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.443 [2024-07-25 14:01:45.645968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.645990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.443 [2024-07-25 14:01:45.646009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.646030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.443 [2024-07-25 14:01:45.646044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.646065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.646079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.646105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.646121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.646142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.646156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.646177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.646192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.646213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.646228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.646248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.646263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.646284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.646311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.646334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.646349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.646370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.646384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.646413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.646429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.646450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.646464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.646485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.646500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.646521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.646535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.646560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.646574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.646596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.646610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.647385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.443 [2024-07-25 14:01:45.647413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.647447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.443 [2024-07-25 14:01:45.647464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.647499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.443 [2024-07-25 14:01:45.647515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 
m:0 dnr:0 00:19:30.443 [2024-07-25 14:01:45.647545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.443 [2024-07-25 14:01:45.647560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:45.647589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:45.647605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:45.647633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:45.647648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:45.647677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:45.647702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:45.647733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:45.647748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:45.647793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:45.647812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:45.647843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:45.647858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:45.647888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:45.647903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:45.647932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:45.647946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:45.647975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:45.647990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:45.648019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:45.648033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:45.648062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:45.648077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:45.648106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:45.648134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:45.648168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:45.648189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:45.648219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:45.648234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:45.648268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:45.648293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:45.648349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:45.648365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:45.648395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:45.648410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.100252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:119344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:59.100340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.100398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:119352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:59.100419] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.100443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:119360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:59.100458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.100478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:119368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:59.100493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.100514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:119376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:59.100528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.100549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:119384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:59.100563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.100584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:119392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:59.100598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.100618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:119400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.444 [2024-07-25 14:01:59.100633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.100653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:118832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.444 [2024-07-25 14:01:59.100668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.100698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:118840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.444 [2024-07-25 14:01:59.100713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.100758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:118848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.444 [2024-07-25 14:01:59.100775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.100795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:118856 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:30.444 [2024-07-25 14:01:59.100809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.100830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.444 [2024-07-25 14:01:59.100844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.100864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:118872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.444 [2024-07-25 14:01:59.100879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.100899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:118880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.444 [2024-07-25 14:01:59.100913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.100933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:118888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.444 [2024-07-25 14:01:59.100947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.100968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.444 [2024-07-25 14:01:59.100982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.101004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:118904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.444 [2024-07-25 14:01:59.101018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.101039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:118912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.444 [2024-07-25 14:01:59.101053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.101073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.444 [2024-07-25 14:01:59.101087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.101108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.444 [2024-07-25 14:01:59.101123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:30.444 [2024-07-25 14:01:59.101143] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:41 nsid:1 lba:118936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.444 [2024-07-25 14:01:59.101158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-07-25 14:01:59.101203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:118952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-07-25 14:01:59.101238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.101323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:119416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.101355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:119424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.101383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.101411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:119440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.101440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:119448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.101468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:119456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.101496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101510] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.101523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:119472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.101551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:119480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.101579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.101616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:119496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.101646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:119504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.101673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:119512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.101701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:119520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.101729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:119528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.101756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:118960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-07-25 14:01:59.101793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 
lba:118968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-07-25 14:01:59.101823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:118976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-07-25 14:01:59.101851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:118984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-07-25 14:01:59.101879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-07-25 14:01:59.101907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:119000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-07-25 14:01:59.101935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:119008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-07-25 14:01:59.101963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.101983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:119016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-07-25 14:01:59.101997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.102013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:119024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-07-25 14:01:59.102026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.102041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:119032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-07-25 14:01:59.102054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.102069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-07-25 14:01:59.102082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.102097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:119048 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-07-25 14:01:59.102110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.102125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:119056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-07-25 14:01:59.102138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.102152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:119064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-07-25 14:01:59.102166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.102180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:119072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-07-25 14:01:59.102193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.102208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:119080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.445 [2024-07-25 14:01:59.102221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.102236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:119536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.102249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.102264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:119544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.102278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.102310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:119552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.102326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.102342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.102362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.445 [2024-07-25 14:01:59.102378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:119568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.445 [2024-07-25 14:01:59.102391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:119576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.446 
[2024-07-25 14:01:59.102419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:119584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.446 [2024-07-25 14:01:59.102448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:119592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.446 [2024-07-25 14:01:59.102476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:119600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.446 [2024-07-25 14:01:59.102504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:119608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.446 [2024-07-25 14:01:59.102531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:119616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.446 [2024-07-25 14:01:59.102559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:119624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.446 [2024-07-25 14:01:59.102588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:119632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.446 [2024-07-25 14:01:59.102615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:119640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.446 [2024-07-25 14:01:59.102644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:119648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.446 [2024-07-25 14:01:59.102671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:119656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.446 [2024-07-25 14:01:59.102707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:119088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.446 [2024-07-25 14:01:59.102743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:119096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.446 [2024-07-25 14:01:59.102772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.446 [2024-07-25 14:01:59.102801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.446 [2024-07-25 14:01:59.102829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:119120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.446 [2024-07-25 14:01:59.102865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:119128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.446 [2024-07-25 14:01:59.102894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:119136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.446 [2024-07-25 14:01:59.102921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.446 [2024-07-25 14:01:59.102949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:119664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.446 [2024-07-25 14:01:59.102977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.102991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:119672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.446 [2024-07-25 14:01:59.103004] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.103019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:119680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.446 [2024-07-25 14:01:59.103032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.103047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:119688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.446 [2024-07-25 14:01:59.103060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.103074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:119696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.446 [2024-07-25 14:01:59.103087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.103107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:119704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.446 [2024-07-25 14:01:59.103121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.103136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:119712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.446 [2024-07-25 14:01:59.103149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.103163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:119720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.446 [2024-07-25 14:01:59.103181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.103197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.446 [2024-07-25 14:01:59.103210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.103225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:119160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.446 [2024-07-25 14:01:59.103238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.103254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:119168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.446 [2024-07-25 14:01:59.103267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.103282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.446 [2024-07-25 14:01:59.103295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.103321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:119184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.446 [2024-07-25 14:01:59.103335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.103357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:119192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.446 [2024-07-25 14:01:59.103370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.103385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:119200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.446 [2024-07-25 14:01:59.103398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.446 [2024-07-25 14:01:59.103413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-07-25 14:01:59.103426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.103441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-07-25 14:01:59.103454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.103468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-07-25 14:01:59.103488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.103504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:119232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-07-25 14:01:59.103517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.103532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:119240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-07-25 14:01:59.103545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.103560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:119248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-07-25 14:01:59.103573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.103588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:119256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-07-25 14:01:59.103601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.103616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:119264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-07-25 14:01:59.103628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.103643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:119272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-07-25 14:01:59.103661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.103676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:119280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-07-25 14:01:59.103689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.103705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-07-25 14:01:59.103718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.103733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:119296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-07-25 14:01:59.103746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.103761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:119304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-07-25 14:01:59.103774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.103789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-07-25 14:01:59.103802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.103817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:119320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-07-25 14:01:59.103830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.103850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:119328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.447 [2024-07-25 14:01:59.103863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.103878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe546a0 is same with the state(5) to be set 00:19:30.447 [2024-07-25 14:01:59.103895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.447 [2024-07-25 14:01:59.103911] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.447 [2024-07-25 14:01:59.103922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119336 len:8 PRP1 0x0 PRP2 0x0 00:19:30.447 [2024-07-25 14:01:59.103934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.103949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.447 [2024-07-25 14:01:59.103958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.447 [2024-07-25 14:01:59.103968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119728 len:8 PRP1 0x0 PRP2 0x0 00:19:30.447 [2024-07-25 14:01:59.103980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.103993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.447 [2024-07-25 14:01:59.104003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.447 [2024-07-25 14:01:59.104012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119736 len:8 PRP1 0x0 PRP2 0x0 00:19:30.447 [2024-07-25 14:01:59.104025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.104037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.447 [2024-07-25 14:01:59.104047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.447 [2024-07-25 14:01:59.104056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119744 len:8 PRP1 0x0 PRP2 0x0 00:19:30.447 [2024-07-25 14:01:59.104070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.104089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.447 [2024-07-25 14:01:59.104099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.447 [2024-07-25 14:01:59.104109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119752 len:8 PRP1 0x0 PRP2 0x0 00:19:30.447 [2024-07-25 14:01:59.104132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.104146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.447 [2024-07-25 14:01:59.104156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.447 [2024-07-25 14:01:59.104166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119760 len:8 PRP1 0x0 PRP2 0x0 00:19:30.447 [2024-07-25 14:01:59.104179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.104191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.447 [2024-07-25 14:01:59.104201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:19:30.447 [2024-07-25 14:01:59.104211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119768 len:8 PRP1 0x0 PRP2 0x0 00:19:30.447 [2024-07-25 14:01:59.104231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.104244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.447 [2024-07-25 14:01:59.104254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.447 [2024-07-25 14:01:59.104264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119776 len:8 PRP1 0x0 PRP2 0x0 00:19:30.447 [2024-07-25 14:01:59.104277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.104289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.447 [2024-07-25 14:01:59.104309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.447 [2024-07-25 14:01:59.104322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119784 len:8 PRP1 0x0 PRP2 0x0 00:19:30.447 [2024-07-25 14:01:59.104335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.447 [2024-07-25 14:01:59.104347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.447 [2024-07-25 14:01:59.104357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.448 [2024-07-25 14:01:59.104367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119792 len:8 PRP1 0x0 PRP2 0x0 00:19:30.448 [2024-07-25 14:01:59.104379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.448 [2024-07-25 14:01:59.104392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.448 [2024-07-25 14:01:59.104401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.448 [2024-07-25 14:01:59.104417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119800 len:8 PRP1 0x0 PRP2 0x0 00:19:30.448 [2024-07-25 14:01:59.104430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.448 [2024-07-25 14:01:59.104443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.448 [2024-07-25 14:01:59.104452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.448 [2024-07-25 14:01:59.104462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119808 len:8 PRP1 0x0 PRP2 0x0 00:19:30.448 [2024-07-25 14:01:59.104475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.448 [2024-07-25 14:01:59.104492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.448 [2024-07-25 14:01:59.104512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.448 
[2024-07-25 14:01:59.104522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119816 len:8 PRP1 0x0 PRP2 0x0 00:19:30.448 [2024-07-25 14:01:59.104534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.448 [2024-07-25 14:01:59.104547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.448 [2024-07-25 14:01:59.104556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.448 [2024-07-25 14:01:59.104566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119824 len:8 PRP1 0x0 PRP2 0x0 00:19:30.448 [2024-07-25 14:01:59.104578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.448 [2024-07-25 14:01:59.104591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.448 [2024-07-25 14:01:59.104601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.448 [2024-07-25 14:01:59.104619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119832 len:8 PRP1 0x0 PRP2 0x0 00:19:30.448 [2024-07-25 14:01:59.104633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.448 [2024-07-25 14:01:59.104646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.448 [2024-07-25 14:01:59.104655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.448 [2024-07-25 14:01:59.104665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119840 len:8 PRP1 0x0 PRP2 0x0 00:19:30.448 [2024-07-25 14:01:59.104678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.448 [2024-07-25 14:01:59.104691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.448 [2024-07-25 14:01:59.104700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.448 [2024-07-25 14:01:59.104710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119848 len:8 PRP1 0x0 PRP2 0x0 00:19:30.448 [2024-07-25 14:01:59.104723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.448 [2024-07-25 14:01:59.104779] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe546a0 was disconnected and freed. reset controller. 
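
The burst of ABORTED - SQ DELETION (00/08) completions above is the signature of a path being torn down under load: once the submission queue goes away, every in-flight and queued READ/WRITE on that qpair is completed with the abort status (some of them manually, per the nvme_qpair_manual_complete_request lines), the disconnected qpair 0xe546a0 is freed, and bdev_nvme schedules a controller reset. The multipath test provokes this by toggling the subsystem's listeners; a minimal sketch of that pattern, assuming the subsystem name and the 10.0.0.2 ports that appear in this log (the ordering and the sleep are illustrative, not copied from multipath.sh):

  # Drop the active path so outstanding I/O is aborted and the host has to
  # reset and reconnect through the remaining listener, then restore it.
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 5
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
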
00:19:30.448 [2024-07-25 14:01:59.105926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:30.448 [2024-07-25 14:01:59.106004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:30.448 [2024-07-25 14:01:59.106026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:30.448 [2024-07-25 14:01:59.106056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd6100 (9): Bad file descriptor
00:19:30.448 [2024-07-25 14:01:59.106480] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:30.448 [2024-07-25 14:01:59.106511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd6100 with addr=10.0.0.2, port=4421
00:19:30.448 [2024-07-25 14:01:59.106533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd6100 is same with the state(5) to be set
00:19:30.448 [2024-07-25 14:01:59.106567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd6100 (9): Bad file descriptor
00:19:30.448 [2024-07-25 14:01:59.106598] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:30.448 [2024-07-25 14:01:59.106614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:30.448 [2024-07-25 14:01:59.106628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:30.448 [2024-07-25 14:01:59.106661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:30.448 [2024-07-25 14:01:59.106678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:30.448 [2024-07-25 14:02:09.180384] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
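
This is bdev_nvme's reconnect handling at work: the first connect() to 10.0.0.2 port 4421 is refused (errno = 111), the controller is marked failed and the reset attempt is reported as failed, and a later retry at 14:02:09 completes the reset successfully. How often and for how long the driver keeps retrying is configured when the controller is attached; the sketch below reuses the flags the timeout test passes later in this log, as an illustration rather than the multipath test's exact command line:

  # bdevperf idles on its RPC socket (-z -r /var/tmp/bdevperf.sock); this attach
  # retries the connection every 2 s and gives up on (deletes) the controller
  # after 5 s of continued loss.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
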
00:19:30.448 Received shutdown signal, test time was about 55.589246 seconds 00:19:30.448 00:19:30.448 Latency(us) 00:19:30.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.448 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:30.448 Verification LBA range: start 0x0 length 0x4000 00:19:30.448 Nvme0n1 : 55.59 7494.27 29.27 0.00 0.00 17045.19 223.42 7046430.72 00:19:30.448 =================================================================================================================== 00:19:30.448 Total : 7494.27 29.27 0.00 0.00 17045.19 223.42 7046430.72 00:19:30.448 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:30.706 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:30.706 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:30.706 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:30.706 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:30.706 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:19:30.965 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:30.965 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:19:30.965 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:30.965 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:30.965 rmmod nvme_tcp 00:19:30.965 rmmod nvme_fabrics 00:19:30.965 rmmod nvme_keyring 00:19:30.965 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:30.965 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:19:30.965 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:19:30.965 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 80276 ']' 00:19:30.965 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 80276 00:19:30.965 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 80276 ']' 00:19:30.965 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 80276 00:19:30.965 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:19:30.965 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:30.965 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80276 00:19:30.965 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:30.965 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:30.965 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80276' 00:19:30.965 killing process with pid 80276 00:19:30.965 14:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 80276 00:19:30.965 14:02:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 80276 00:19:31.223 14:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:31.223 14:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:31.223 14:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:31.223 14:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:31.223 14:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:31.223 14:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.223 14:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.223 14:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.223 14:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:31.223 00:19:31.223 real 1m2.184s 00:19:31.223 user 2m53.074s 00:19:31.223 sys 0m18.210s 00:19:31.223 14:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:31.223 14:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:31.223 ************************************ 00:19:31.223 END TEST nvmf_host_multipath 00:19:31.223 ************************************ 00:19:31.223 14:02:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:31.223 14:02:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:31.224 14:02:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:31.224 14:02:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.224 ************************************ 00:19:31.224 START TEST nvmf_timeout 00:19:31.224 ************************************ 00:19:31.224 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:31.224 * Looking for test storage... 
00:19:31.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:31.224 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:31.224 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:31.224 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:31.224 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.224 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:31.224 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.224 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.224 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.224 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.224 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.224 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.224 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.482 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:19:31.482 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:19:31.482 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.482 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.482 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:31.482 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:31.482 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:31.482 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.482 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.482 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.482 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.482 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.482 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.482 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:31.483 Cannot find device "nvmf_tgt_br" 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:31.483 Cannot find device "nvmf_tgt_br2" 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # true 
00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:31.483 Cannot find device "nvmf_tgt_br" 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:31.483 Cannot find device "nvmf_tgt_br2" 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:31.483 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:31.483 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:31.483 14:02:20 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:31.483 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:31.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:19:31.742 00:19:31.742 --- 10.0.0.2 ping statistics --- 00:19:31.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.742 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:31.742 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:31.742 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:19:31.742 00:19:31.742 --- 10.0.0.3 ping statistics --- 00:19:31.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.742 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:31.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:31.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:31.742 00:19:31.742 --- 10.0.0.1 ping statistics --- 00:19:31.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.742 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=81446 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 81446 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81446 ']' 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:31.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:31.742 14:02:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:31.742 [2024-07-25 14:02:20.622358] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:19:31.742 [2024-07-25 14:02:20.622437] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.742 [2024-07-25 14:02:20.764682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:32.000 [2024-07-25 14:02:20.886453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
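
At this point nvmftestinit has finished building the virtual network the TCP tests run over: the target lives in the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3 on its veth ends, the initiator side keeps 10.0.0.1, everything is joined by the nvmf_br bridge, an iptables rule accepts traffic to port 4420, and the three pings confirm reachability before nvmf_tgt -i 0 -e 0xFFFF -m 0x3 is launched inside the namespace. A condensed sketch of those steps, using the interface names from the trace (the link-up commands and the second target veth pair are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target-side reachability check
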
00:19:32.000 [2024-07-25 14:02:20.886532] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.000 [2024-07-25 14:02:20.886547] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.000 [2024-07-25 14:02:20.886558] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.000 [2024-07-25 14:02:20.886567] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:32.000 [2024-07-25 14:02:20.886728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.000 [2024-07-25 14:02:20.886742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.000 [2024-07-25 14:02:20.944370] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:32.955 14:02:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.955 14:02:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:19:32.955 14:02:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:32.955 14:02:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:32.955 14:02:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:32.955 14:02:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.955 14:02:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:32.955 14:02:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:32.955 [2024-07-25 14:02:21.860755] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.955 14:02:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:33.213 Malloc0 00:19:33.213 14:02:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:33.471 14:02:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:33.729 14:02:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:33.987 [2024-07-25 14:02:22.857413] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.987 14:02:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81495 00:19:33.987 14:02:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:33.987 14:02:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81495 /var/tmp/bdevperf.sock 00:19:33.987 14:02:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81495 ']' 00:19:33.987 14:02:22 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:33.987 14:02:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:33.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:33.987 14:02:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:33.987 14:02:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:33.987 14:02:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:33.987 [2024-07-25 14:02:22.927547] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:19:33.987 [2024-07-25 14:02:22.927647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81495 ] 00:19:34.245 [2024-07-25 14:02:23.062730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.245 [2024-07-25 14:02:23.168648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.245 [2024-07-25 14:02:23.222974] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:34.502 14:02:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:34.502 14:02:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:19:34.502 14:02:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:34.760 14:02:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:35.018 NVMe0n1 00:19:35.018 14:02:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81510 00:19:35.018 14:02:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:35.018 14:02:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:35.018 Running I/O for 10 seconds... 
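
The trace above is the whole bring-up for the timeout test: the TCP transport is created, a 64 MiB malloc bdev with 512-byte blocks becomes namespace 1 of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and bdevperf attaches a controller with the reconnect options shown earlier before perform_tests starts the 10-second, queue-depth-128 verify job. Condensed from the rpc.py calls in the trace (the full /home/vagrant/spdk_repo paths are shortened here):

  # Target side, driven through the nvmf_tgt running in nvmf_tgt_ns_spdk:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Host side: bdevperf waits for RPCs (-z), the controller is attached with the
  # 2 s / 5 s reconnect settings shown earlier, then the run starts over the same socket.
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The nvmf_subsystem_remove_listener call that follows is what pulls 10.0.0.2:4420 out from under this workload; the repeated nvmf_tcp recv-state errors below are the target-side fallout while the host-side reconnect and timeout handling is exercised.
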
00:19:35.953 14:02:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:36.215 [2024-07-25 14:02:25.131388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set 00:19:36.215 [2024-07-25 14:02:25.131442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set 00:19:36.215 [2024-07-25 14:02:25.131454] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set 00:19:36.215 [2024-07-25 14:02:25.131463] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set 00:19:36.215 [2024-07-25 14:02:25.131472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set 00:19:36.215 [2024-07-25 14:02:25.131486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set 00:19:36.215 [2024-07-25 14:02:25.131496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set 00:19:36.215 [2024-07-25 14:02:25.131505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set 00:19:36.215 [2024-07-25 14:02:25.131513] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set 00:19:36.215 [2024-07-25 14:02:25.131522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set 00:19:36.215 [2024-07-25 14:02:25.131531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set 00:19:36.215 [2024-07-25 14:02:25.131539] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set 00:19:36.216 [2024-07-25 14:02:25.131548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set 00:19:36.216 [2024-07-25 14:02:25.131556] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set 00:19:36.216 [2024-07-25 14:02:25.131564] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set 00:19:36.216 [2024-07-25 14:02:25.131572] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set 00:19:36.216 [2024-07-25 14:02:25.131581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set 00:19:36.216 [2024-07-25 14:02:25.131589] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set 00:19:36.216 [2024-07-25 14:02:25.131598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set 00:19:36.216 [2024-07-25 14:02:25.131606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set 00:19:36.216 [2024-07-25 14:02:25.131614] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set
00:19:36.216 [2024-07-25 14:02:25.131622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a750 is same with the state(5) to be set
[... the tcp.c:1653 nvmf_tcp_qpair_set_recv_state *ERROR* message above repeats continuously for tqpair=0x166a750 through 2024-07-25 14:02:25.132540 ...]
00:19:36.217 [2024-07-25 14:02:25.133382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:36.217 [2024-07-25 14:02:25.133422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous nvme_io_qpair_print_command / spdk_nvme_print_completion pairs repeat from 14:02:25.133448 through 14:02:25.146769 for READ lba:57640-58520 and WRITE lba:58536-58648 (sqid:1, len:8), every command completed as ABORTED - SQ DELETION ...]
00:19:36.221 [2024-07-25 14:02:25.146915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3b1b0 is same with the state(5) to be set
00:19:36.221 [2024-07-25 14:02:25.147013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:36.221 [2024-07-25 14:02:25.147024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:36.221 [2024-07-25 14:02:25.147034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58528 len:8 PRP1 0x0 PRP2 0x0
00:19:36.221 [2024-07-25 14:02:25.147044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:36.221 [2024-07-25 14:02:25.147347] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d3b1b0 was disconnected and freed. reset controller.
00:19:36.221 [2024-07-25 14:02:25.147794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:36.221 [2024-07-25 14:02:25.147829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:36.221 [2024-07-25 14:02:25.147842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:36.221 [2024-07-25 14:02:25.147852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:36.221 [2024-07-25 14:02:25.147862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:36.221 [2024-07-25 14:02:25.147871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:36.221 [2024-07-25 14:02:25.147881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:36.221 [2024-07-25 14:02:25.148095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:36.221 [2024-07-25 14:02:25.148116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccad40 is same with the state(5) to be set
00:19:36.221 [2024-07-25 14:02:25.148527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:36.221 [2024-07-25 14:02:25.148571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ccad40 (9): Bad file descriptor
00:19:36.221 [2024-07-25 14:02:25.148679] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:36.221 [2024-07-25 14:02:25.148938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccad40 with addr=10.0.0.2, port=4420
00:19:36.221 [2024-07-25 14:02:25.148971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccad40 is same with the state(5) to be set
00:19:36.221 [2024-07-25 14:02:25.148994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ccad40 (9): Bad file descriptor
00:19:36.221 [2024-07-25 14:02:25.149012] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:36.221 [2024-07-25 14:02:25.149022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:36.221 [2024-07-25 14:02:25.149033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:36.221 [2024-07-25 14:02:25.149253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:36.221 [2024-07-25 14:02:25.149283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:36.221 14:02:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:19:38.127 [2024-07-25 14:02:27.149706] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:38.127 [2024-07-25 14:02:27.149788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccad40 with addr=10.0.0.2, port=4420
00:19:38.127 [2024-07-25 14:02:27.149805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccad40 is same with the state(5) to be set
00:19:38.127 [2024-07-25 14:02:27.149835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ccad40 (9): Bad file descriptor
00:19:38.127 [2024-07-25 14:02:27.149855] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:38.127 [2024-07-25 14:02:27.149866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:38.127 [2024-07-25 14:02:27.149878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:38.127 [2024-07-25 14:02:27.149907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:38.127 [2024-07-25 14:02:27.149919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:38.401 14:02:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:19:38.401 14:02:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:38.401 14:02:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:19:38.401 14:02:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:19:38.401 14:02:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:19:38.660 14:02:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:19:38.660 14:02:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:19:38.917 14:02:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:19:38.917 14:02:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:19:40.292 [2024-07-25 14:02:29.150081] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:40.292 [2024-07-25 14:02:29.150166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ccad40 with addr=10.0.0.2, port=4420
00:19:40.292 [2024-07-25 14:02:29.150183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccad40 is same with the state(5) to be set
00:19:40.292 [2024-07-25 14:02:29.150216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ccad40 (9): Bad file descriptor
00:19:40.292 [2024-07-25 14:02:29.150236] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:40.292 [2024-07-25 14:02:29.150247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed [2024-07-25 14:02:29.150260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:40.292 [2024-07-25 14:02:29.150290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:40.292 [2024-07-25 14:02:29.150317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:42.194 [2024-07-25 14:02:31.150365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:42.194 [2024-07-25 14:02:31.150423] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:42.194 [2024-07-25 14:02:31.150436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:42.194 [2024-07-25 14:02:31.150447] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:19:42.194 [2024-07-25 14:02:31.150477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:43.172
00:19:43.172 Latency(us)
00:19:43.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:43.172 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:43.172 Verification LBA range: start 0x0 length 0x4000
00:19:43.172 NVMe0n1 : 8.10 888.86 3.47 15.79 0.00 141499.50 4110.89 7046430.72
00:19:43.172 ===================================================================================================================
00:19:43.172 Total : 888.86 3.47 15.79 0.00 141499.50 4110.89 7046430.72
00:19:43.172 0
00:19:43.739 14:02:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:19:43.739 14:02:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:43.739 14:02:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:19:43.998 14:02:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:19:43.998 14:02:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:19:43.998 14:02:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:19:43.998 14:02:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:19:44.256 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:19:44.256 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81510
00:19:44.256 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81495
00:19:44.256 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81495 ']'
00:19:44.256 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81495
00:19:44.256 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname
00:19:44.256 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:44.256 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81495
00:19:44.256 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:19:44.256 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:19:44.256 killing process with pid 81495
00:19:44.256 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81495'
00:19:44.256 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81495
00:19:44.256 Received shutdown signal, test time was about 9.210050 seconds
00:19:44.256
00:19:44.256 Latency(us)
00:19:44.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:44.256 ===================================================================================================================
00:19:44.256 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:44.256 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81495
00:19:44.515 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:44.773 [2024-07-25 14:02:33.732929] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:44.773 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81633
00:19:44.773 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:19:44.773 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81633 /var/tmp/bdevperf.sock
00:19:44.773 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81633 ']'
00:19:44.773 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:44.773 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:44.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:44.773 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:44.773 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:44.773 14:02:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
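The first bdevperf run above has wound down: its result table is internally consistent (888.86 IOPS at 4 KiB per I/O works out to the reported 3.47 MiB/s), and once the controller was declared lost, bdev_nvme_get_controllers and bdev_get_bdevs both return empty names (the [[ '' == '' ]] checks). The trace then restores the TCP listener and starts a fresh bdevperf in RPC-server mode for the next case. A minimal sketch of that relaunch sequence, assuming the same SPDK checkout path, target address, and RPC socket as this run (it is not the test script itself):

```bash
#!/usr/bin/env bash
# Sketch of the relaunch visible in the trace above.
spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bdevperf.sock

# Re-add the NVMe/TCP listener that the previous case removed.
"$spdk"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420

# Start bdevperf idle (-z): it opens its own RPC socket and waits instead of
# running I/O immediately, so a controller can be attached first.
"$spdk"/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &
bdevperf_pid=$!

# The script then polls until the UNIX socket exists (waitforlisten) before
# sending any RPCs to it.
```

Because of -z, I/O only starts later, when perform_tests is issued over the same socket via bdevperf.py, as the trace below shows.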
00:19:44.774 [2024-07-25 14:02:33.792185] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization...
00:19:44.774 [2024-07-25 14:02:33.792274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81633 ]
00:19:45.032 [2024-07-25 14:02:33.924583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:45.290 [2024-07-25 14:02:34.033274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:19:45.290 [2024-07-25 14:02:34.086546] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:19:45.857 14:02:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:45.857 14:02:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0
00:19:45.857 14:02:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:19:46.116 14:02:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:19:46.375 NVMe0n1
00:19:46.375 14:02:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81651
00:19:46.375 14:02:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:46.375 14:02:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:19:46.633 Running I/O for 10 seconds...
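Before kicking off I/O, the script configures how the new NVMe bdev should behave when the connection drops again: a retry setting via bdev_nvme_set_options -r -1, plus three reconnect flags on bdev_nvme_attach_controller. Restated below as a sketch, with the attach-flag semantics spelled out per SPDK's bdev_nvme reconnect options; treat it as a reading aid rather than the test script itself:

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# -r -1 exactly as in the trace above (a retry-count setting; see
# "rpc.py bdev_nvme_set_options -h" for its precise meaning in this SPDK revision).
"$rpc" -s "$sock" bdev_nvme_set_options -r -1

# --reconnect-delay-sec 1      : attempt a reconnect roughly once per second
# --fast-io-fail-timeout-sec 2 : after ~2 s disconnected, fail queued I/O back up
# --ctrlr-loss-timeout-sec 5   : after ~5 s disconnected, give the controller up
"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
```

With these values the initiator keeps retrying about once per second after the listener is removed (which is exactly what host/timeout.sh@87 does next), starts failing I/O back after about two seconds, and abandons the controller after about five.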
00:19:47.572 14:02:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:47.572 [2024-07-25 14:02:36.580832] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.580897] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.580909] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.580918] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.580927] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.580937] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.580946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.580954] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.580963] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.580971] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.580980] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.580989] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.580998] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581006] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581032] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581041] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581050] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581058] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581067] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581084] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581101] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581126] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581143] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581152] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581160] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581177] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581245] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the 
state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581262] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581279] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581287] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581319] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581346] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581355] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581363] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581386] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581394] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581419] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581427] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581435] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581443] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581451] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581467] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581476] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581490] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581498] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581507] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581516] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581533] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.572 [2024-07-25 14:02:36.581541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581565] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581590] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581639] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 
14:02:36.581647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581679] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581721] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581746] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581787] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581795] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581812] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581829] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same 
with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.581845] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868ca0 is same with the state(5) to be set 00:19:47.573 [2024-07-25 14:02:36.582755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.582796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 14:02:36.582821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.582833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 14:02:36.582845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.582855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 14:02:36.582867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.582876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 14:02:36.582888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.582897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 14:02:36.582908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.582917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 14:02:36.582929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.582938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 14:02:36.582949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.582958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 14:02:36.582969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.582979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 
14:02:36.582990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.582999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 14:02:36.583011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.583020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 14:02:36.583031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.583040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 14:02:36.583474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.583488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 14:02:36.583500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.583510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 14:02:36.583523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.583533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 14:02:36.583544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.583554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 14:02:36.583566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.583576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 14:02:36.583587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.583596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 14:02:36.583727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.583863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 14:02:36.583878] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.584120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 14:02:36.584147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.584158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.573 [2024-07-25 14:02:36.584170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.573 [2024-07-25 14:02:36.584179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.584429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.584442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.584454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.584566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.584579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.584590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.584875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.584946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.584960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.584969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.584981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.584991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.585003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.585012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.585023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:107 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.585032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.585044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.585053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.585065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.585403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.585430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.585442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.585454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.585463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.585475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.585484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.585496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.585505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.585517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.585526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.585537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.585546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.585773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.585790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.585802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62040 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.585811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.585822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.585832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.585969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.585980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.586266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.586384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.586400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.586410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.586421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.586431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.586675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.586699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.586712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.586822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.586839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.586849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.586861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.587152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.587167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:47.574 [2024-07-25 14:02:36.587177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.587189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.587198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.587211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.587220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.587435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.587447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.587459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.587469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.587480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.587489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.587722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.587742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.587756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.587765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.587777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.587786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.587798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.588008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.588031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.588041] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.588053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.574 [2024-07-25 14:02:36.588063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.574 [2024-07-25 14:02:36.588075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.588084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.588342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.588363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.588377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.588386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.588398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.588407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.588419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.588563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.588656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.588667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.588678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.588687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.588698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.588708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.588849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.588953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.588968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.588978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.588989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.588999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.589011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.589020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.589284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.589388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.589406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.589417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.589428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.589437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.589689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.589712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.589726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.589735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.589747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.589756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.589768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.589981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.589997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.590006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.590018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.590027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.590039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.590049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.590178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.590189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.590419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.590438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.590451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.590461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.590473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.590482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.590494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.590503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.590639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.590656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.590785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.590796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.591030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.591049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.591061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.591070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.591082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.591091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.591103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.591112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.591337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.591358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.591373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.591383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.591395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.591405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.591416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.591685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.591709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.575 [2024-07-25 14:02:36.591977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.575 [2024-07-25 14:02:36.592000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.576 [2024-07-25 14:02:36.592234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:47.576 [2024-07-25 14:02:36.592258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.576 [2024-07-25 14:02:36.592269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.592281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.576 [2024-07-25 14:02:36.592291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.592314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.576 [2024-07-25 14:02:36.592464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.592558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.576 [2024-07-25 14:02:36.592571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.592583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.576 [2024-07-25 14:02:36.592592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.592604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.576 [2024-07-25 14:02:36.592852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.592874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.576 [2024-07-25 14:02:36.592884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.592896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.576 [2024-07-25 14:02:36.592906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.592917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.576 [2024-07-25 14:02:36.593059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.593172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.576 [2024-07-25 14:02:36.593184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.593195] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.576 [2024-07-25 14:02:36.593204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.593216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.576 [2024-07-25 14:02:36.593225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.593344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.576 [2024-07-25 14:02:36.593355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.593366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.576 [2024-07-25 14:02:36.593617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.593644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.576 [2024-07-25 14:02:36.593775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.594015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.576 [2024-07-25 14:02:36.594037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.594050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.576 [2024-07-25 14:02:36.594061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.594072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.576 [2024-07-25 14:02:36.594081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.594092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.576 [2024-07-25 14:02:36.594364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.594476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.576 [2024-07-25 14:02:36.594488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.594499] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.576 [2024-07-25 14:02:36.594509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.594521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.576 [2024-07-25 14:02:36.594635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.594652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.576 [2024-07-25 14:02:36.594661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.594787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.576 [2024-07-25 14:02:36.594808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.594943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.576 [2024-07-25 14:02:36.595072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.595207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.576 [2024-07-25 14:02:36.595229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.595479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.576 [2024-07-25 14:02:36.595492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.595503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14461b0 is same with the state(5) to be set 00:19:47.576 [2024-07-25 14:02:36.595517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:47.576 [2024-07-25 14:02:36.595525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:47.576 [2024-07-25 14:02:36.595534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62624 len:8 PRP1 0x0 PRP2 0x0 00:19:47.576 [2024-07-25 14:02:36.595543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.576 [2024-07-25 14:02:36.595850] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14461b0 was disconnected and freed. reset controller. 
00:19:47.576 [2024-07-25 14:02:36.596267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:47.576 [2024-07-25 14:02:36.596295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:47.576 [2024-07-25 14:02:36.596322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:47.577 [2024-07-25 14:02:36.596332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:47.577 [2024-07-25 14:02:36.596343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:47.577 [2024-07-25 14:02:36.596352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:47.577 [2024-07-25 14:02:36.596361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:47.577 [2024-07-25 14:02:36.596370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:47.577 [2024-07-25 14:02:36.596379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d5d40 is same with the state(5) to be set
00:19:47.577 [2024-07-25 14:02:36.597016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:47.577 [2024-07-25 14:02:36.597070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d5d40 (9): Bad file descriptor
00:19:47.577 [2024-07-25 14:02:36.597501] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:47.577 [2024-07-25 14:02:36.597552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d5d40 with addr=10.0.0.2, port=4420
00:19:47.577 [2024-07-25 14:02:36.597583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d5d40 is same with the state(5) to be set
00:19:47.577 [2024-07-25 14:02:36.597872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d5d40 (9): Bad file descriptor
00:19:47.577 [2024-07-25 14:02:36.597920] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:47.577 [2024-07-25 14:02:36.597938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:47.577 [2024-07-25 14:02:36.597954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:47.577 [2024-07-25 14:02:36.598234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
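The repeated "connect() failed, errno = 111" entries above correspond to ECONNREFUSED on Linux: while the test has the subsystem's listener removed, nothing accepts TCP connections on 10.0.0.2:4420, so every reconnect attempt fails and the controller stays in the failed state until the listener comes back. A minimal sketch for confirming this from the test host (not part of the test itself; assumes python3 and a netcat build that supports -z are available):

# Decode the errno seen in the log (prints "ECONNREFUSED - Connection refused" on Linux)
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'

# Zero-I/O probe of the NVMe/TCP port from the log; while the listener is removed
# this is expected to fail with "Connection refused"
nc -zv 10.0.0.2 4420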
00:19:47.577 [2024-07-25 14:02:36.598260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:47.836 14:02:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:19:48.772 [2024-07-25 14:02:37.598762] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:48.772 [2024-07-25 14:02:37.598864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d5d40 with addr=10.0.0.2, port=4420
00:19:48.772 [2024-07-25 14:02:37.598881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d5d40 is same with the state(5) to be set
00:19:48.772 [2024-07-25 14:02:37.598907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d5d40 (9): Bad file descriptor
00:19:48.772 [2024-07-25 14:02:37.598927] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:48.772 [2024-07-25 14:02:37.598937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:48.772 [2024-07-25 14:02:37.598949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:48.772 [2024-07-25 14:02:37.598978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:48.772 [2024-07-25 14:02:37.598990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:48.772 14:02:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:49.030 [2024-07-25 14:02:37.867728] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:49.030 14:02:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81651
00:19:49.596 [2024-07-25 14:02:38.619371] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:57.726
00:19:57.726 Latency(us)
00:19:57.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:57.726 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:57.726 Verification LBA range: start 0x0 length 0x4000
00:19:57.726 NVMe0n1 : 10.01 6240.98 24.38 0.00 0.00 20478.65 2219.29 3050402.91
00:19:57.726 ===================================================================================================================
00:19:57.726 Total : 6240.98 24.38 0.00 0.00 20478.65 2219.29 3050402.91
00:19:57.726 0
00:19:57.726 14:02:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81756
00:19:57.726 14:02:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:57.726 14:02:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:19:57.726 Running I/O for 10 seconds...
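The two RPCs visible in the trace lines are what drive this phase of the timeout test: the NVMe/TCP listener on 10.0.0.2:4420 is removed so in-flight I/O is aborted and the host's resets keep failing, and about a second later it is added back, at which point the pending reset completes ("Resetting controller successful") and bdevperf prints the verify job's statistics. A rough sketch of that toggle, using only commands and arguments that appear in this log (the real sequencing lives in the host/timeout.sh script referenced in the trace lines):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Drop the listener: queued commands complete as ABORTED - SQ DELETION and the
# host's reconnect attempts fail with errno 111 until the listener returns.
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420

sleep 1

# Restore the listener: the target logs "NVMe/TCP Target Listening" and the
# pending controller reset finishes.
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420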
00:19:57.726 14:02:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:57.726 [2024-07-25 14:02:46.752311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.726 [2024-07-25 14:02:46.752380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.752405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.752416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.752429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.752439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.752450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.752459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.752471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.752481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.752493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.752502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.752513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.752522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.752534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.752543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.752555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.752564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.752575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 
[2024-07-25 14:02:46.752585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.752596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.752605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.752617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.752626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.752638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.752648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.752659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.752669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.752682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.752692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.753126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.753224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.753239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.753249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.753261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.753271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.753284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.753293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.753320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.753331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.753342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.753352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.753363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.753373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.753384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.753394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.753405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.753414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.753531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.753544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.753555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.753565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.753577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.726 [2024-07-25 14:02:46.753976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.753994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.726 [2024-07-25 14:02:46.754005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.754017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.726 [2024-07-25 14:02:46.754026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.754039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.726 [2024-07-25 14:02:46.754048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.754060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.726 [2024-07-25 14:02:46.754069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.754080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.726 [2024-07-25 14:02:46.754089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.754101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.726 [2024-07-25 14:02:46.754110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.754438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445790 is same with the state(5) to be set 00:19:57.726 [2024-07-25 14:02:46.754454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.726 [2024-07-25 14:02:46.754463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.726 [2024-07-25 14:02:46.754472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65736 len:8 PRP1 0x0 PRP2 0x0 00:19:57.726 [2024-07-25 14:02:46.754482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.754492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.726 [2024-07-25 14:02:46.754500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.726 [2024-07-25 14:02:46.754508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65952 len:8 PRP1 0x0 PRP2 0x0 00:19:57.726 [2024-07-25 14:02:46.754517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.754528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.726 [2024-07-25 14:02:46.754535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.726 [2024-07-25 14:02:46.754543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65960 len:8 PRP1 0x0 PRP2 0x0 00:19:57.726 [2024-07-25 14:02:46.754552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.754561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.726 [2024-07-25 14:02:46.754575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.726 [2024-07-25 14:02:46.754582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65968 len:8 PRP1 0x0 PRP2 0x0 00:19:57.726 [2024-07-25 14:02:46.754591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.754600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.726 [2024-07-25 14:02:46.754607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.726 [2024-07-25 14:02:46.754895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65976 len:8 PRP1 0x0 PRP2 0x0 00:19:57.726 [2024-07-25 14:02:46.755004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.755017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.726 [2024-07-25 14:02:46.755025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.726 [2024-07-25 14:02:46.755034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65984 len:8 PRP1 0x0 PRP2 0x0 00:19:57.726 [2024-07-25 14:02:46.755043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.755150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.726 [2024-07-25 14:02:46.755159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.726 [2024-07-25 14:02:46.755168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65992 len:8 PRP1 0x0 PRP2 0x0 00:19:57.726 [2024-07-25 14:02:46.755177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.755187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.726 [2024-07-25 14:02:46.755316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.726 [2024-07-25 14:02:46.755329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66000 len:8 PRP1 0x0 PRP2 0x0 00:19:57.726 [2024-07-25 14:02:46.755338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.755349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.726 [2024-07-25 14:02:46.755356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.726 [2024-07-25 14:02:46.755465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66008 len:8 PRP1 0x0 PRP2 0x0 00:19:57.726 [2024-07-25 14:02:46.755477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.755488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.726 [2024-07-25 14:02:46.755495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.726 [2024-07-25 14:02:46.755504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66016 len:8 PRP1 0x0 PRP2 0x0 00:19:57.726 [2024-07-25 14:02:46.755513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:57.726 [2024-07-25 14:02:46.755522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.726 [2024-07-25 14:02:46.755639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.726 [2024-07-25 14:02:46.755651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66024 len:8 PRP1 0x0 PRP2 0x0 00:19:57.726 [2024-07-25 14:02:46.755865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.726 [2024-07-25 14:02:46.755891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.726 [2024-07-25 14:02:46.755900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.987 [2024-07-25 14:02:46.755908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66032 len:8 PRP1 0x0 PRP2 0x0 00:19:57.987 [2024-07-25 14:02:46.755919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.987 [2024-07-25 14:02:46.755929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.987 [2024-07-25 14:02:46.755937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.987 [2024-07-25 14:02:46.755945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66040 len:8 PRP1 0x0 PRP2 0x0 00:19:57.987 [2024-07-25 14:02:46.755954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.987 [2024-07-25 14:02:46.755963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.987 [2024-07-25 14:02:46.755971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.987 [2024-07-25 14:02:46.755978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66048 len:8 PRP1 0x0 PRP2 0x0 00:19:57.987 [2024-07-25 14:02:46.755987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.987 [2024-07-25 14:02:46.755996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.987 [2024-07-25 14:02:46.756003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.987 [2024-07-25 14:02:46.756253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66056 len:8 PRP1 0x0 PRP2 0x0 00:19:57.987 [2024-07-25 14:02:46.756407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.987 [2024-07-25 14:02:46.756513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.987 [2024-07-25 14:02:46.756524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.987 [2024-07-25 14:02:46.756533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66064 len:8 PRP1 0x0 PRP2 0x0 00:19:57.987 [2024-07-25 14:02:46.756543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.987 [2024-07-25 14:02:46.756553] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.987 [2024-07-25 14:02:46.756561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.987 [2024-07-25 14:02:46.756568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66072 len:8 PRP1 0x0 PRP2 0x0 00:19:57.987 [2024-07-25 14:02:46.756577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.987 [2024-07-25 14:02:46.756586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.987 [2024-07-25 14:02:46.756594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.987 [2024-07-25 14:02:46.756602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66080 len:8 PRP1 0x0 PRP2 0x0 00:19:57.987 [2024-07-25 14:02:46.756610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.987 [2024-07-25 14:02:46.756620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.987 [2024-07-25 14:02:46.756627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.987 [2024-07-25 14:02:46.756635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66088 len:8 PRP1 0x0 PRP2 0x0 00:19:57.987 [2024-07-25 14:02:46.756998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.987 [2024-07-25 14:02:46.757011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.987 [2024-07-25 14:02:46.757020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.987 [2024-07-25 14:02:46.757028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66096 len:8 PRP1 0x0 PRP2 0x0 00:19:57.987 [2024-07-25 14:02:46.757153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.987 [2024-07-25 14:02:46.757168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.988 [2024-07-25 14:02:46.757176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.988 [2024-07-25 14:02:46.757184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66104 len:8 PRP1 0x0 PRP2 0x0 00:19:57.988 [2024-07-25 14:02:46.757193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.988 [2024-07-25 14:02:46.757395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.988 [2024-07-25 14:02:46.757414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.988 [2024-07-25 14:02:46.757423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66112 len:8 PRP1 0x0 PRP2 0x0 00:19:57.988 [2024-07-25 14:02:46.757433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.988 [2024-07-25 14:02:46.757443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:19:57.988 [2024-07-25 14:02:46.757451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.988 [2024-07-25 14:02:46.757459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66120 len:8 PRP1 0x0 PRP2 0x0 00:19:57.988 [2024-07-25 14:02:46.757468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.988 [2024-07-25 14:02:46.757479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.988 [2024-07-25 14:02:46.757486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.988 [2024-07-25 14:02:46.757495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66128 len:8 PRP1 0x0 PRP2 0x0 00:19:57.988 [2024-07-25 14:02:46.757504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.988 [2024-07-25 14:02:46.757514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.988 [2024-07-25 14:02:46.757521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.988 [2024-07-25 14:02:46.757530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66136 len:8 PRP1 0x0 PRP2 0x0 00:19:57.988 [2024-07-25 14:02:46.757538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.988 [2024-07-25 14:02:46.757548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.988 [2024-07-25 14:02:46.757555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.988 [2024-07-25 14:02:46.757563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66144 len:8 PRP1 0x0 PRP2 0x0 00:19:57.988 [2024-07-25 14:02:46.757815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.988 [2024-07-25 14:02:46.757954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.988 [2024-07-25 14:02:46.757965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.988 [2024-07-25 14:02:46.758062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66152 len:8 PRP1 0x0 PRP2 0x0 00:19:57.988 [2024-07-25 14:02:46.758075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.988 [2024-07-25 14:02:46.758086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.988 [2024-07-25 14:02:46.758093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.988 [2024-07-25 14:02:46.758101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66160 len:8 PRP1 0x0 PRP2 0x0 00:19:57.988 [2024-07-25 14:02:46.758110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.988 [2024-07-25 14:02:46.758119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.988 [2024-07-25 
14:02:46.758210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.988 [2024-07-25 14:02:46.758221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66168 len:8 PRP1 0x0 PRP2 0x0 00:19:57.988 [2024-07-25 14:02:46.758230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.988 [2024-07-25 14:02:46.758240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.988 [2024-07-25 14:02:46.758248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.988 [2024-07-25 14:02:46.758345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66176 len:8 PRP1 0x0 PRP2 0x0 00:19:57.988 [2024-07-25 14:02:46.758359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.988 [2024-07-25 14:02:46.758370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.988 [2024-07-25 14:02:46.758377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.988 [2024-07-25 14:02:46.758386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66184 len:8 PRP1 0x0 PRP2 0x0 00:19:57.988 [2024-07-25 14:02:46.758395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.988 [2024-07-25 14:02:46.758498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.988 [2024-07-25 14:02:46.758508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.988 [2024-07-25 14:02:46.758517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66192 len:8 PRP1 0x0 PRP2 0x0 00:19:57.988 [2024-07-25 14:02:46.758526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.988 [2024-07-25 14:02:46.758536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.988 [2024-07-25 14:02:46.758633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.988 [2024-07-25 14:02:46.758646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66200 len:8 PRP1 0x0 PRP2 0x0 00:19:57.988 [2024-07-25 14:02:46.758656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.988 [2024-07-25 14:02:46.758666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.988 [2024-07-25 14:02:46.758673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.988 [2024-07-25 14:02:46.758681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66208 len:8 PRP1 0x0 PRP2 0x0 00:19:57.988 [2024-07-25 14:02:46.758690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.988 [2024-07-25 14:02:46.758801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.988 [2024-07-25 14:02:46.758814] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.988 [2024-07-25 14:02:46.758822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66216 len:8 PRP1 0x0 PRP2 0x0 00:19:57.988 [2024-07-25 14:02:46.758831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.988 [2024-07-25 14:02:46.758841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the same four-message sequence ("aborting queued i/o", "Command completed manually:", the WRITE print, "ABORTED - SQ DELETION (00/08)") repeats for every queued WRITE from lba:66224 through lba:66688 (len:8, lba step 8) between 14:02:46.758 and 14:02:46.768 ...]
14:02:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3
00:19:57.991 [2024-07-25 14:02:46.768631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66696 len:8 PRP1 0x0 PRP2 0x0 00:19:57.991 [2024-07-25 14:02:46.768645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
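The abort flood above is easier to audit with a quick filter than by eye. As a minimal sketch (assuming the console output has been saved to a local file; the name autotest.log is a placeholder, and the pattern simply matches the WRITE prints shown above), the following reports how many WRITEs were printed and the LBA range they cover:

  grep -oE 'WRITE sqid:[0-9]+ cid:[0-9]+ nsid:[0-9]+ lba:[0-9]+' autotest.log |
    awk -F'lba:' 'NR == 1 { min = max = $2 + 0 }
                  { lba = $2 + 0; if (lba < min) min = lba; if (lba > max) max = lba; n++ }
                  END { printf "WRITE prints: %d, lba range: %d-%d\n", n, min, max }'

Restricted to the stretch summarized above, it reports the lba:66216-66696 range.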
00:19:57.991 [2024-07-25 14:02:46.768731] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1445790 was disconnected and freed. reset controller. 00:19:57.991 [2024-07-25 14:02:46.769070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.991 [2024-07-25 14:02:46.769097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.991 [2024-07-25 14:02:46.769111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.991 [2024-07-25 14:02:46.769120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.991 [2024-07-25 14:02:46.769130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.991 [2024-07-25 14:02:46.769139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.991 [2024-07-25 14:02:46.769149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.991 [2024-07-25 14:02:46.769165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.991 [2024-07-25 14:02:46.769277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d5d40 is same with the state(5) to be set 00:19:57.991 [2024-07-25 14:02:46.769757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:57.991 [2024-07-25 14:02:46.769796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d5d40 (9): Bad file descriptor 00:19:57.991 [2024-07-25 14:02:46.770083] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:57.991 [2024-07-25 14:02:46.770116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d5d40 with addr=10.0.0.2, port=4420 00:19:57.991 [2024-07-25 14:02:46.770129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d5d40 is same with the state(5) to be set 00:19:57.991 [2024-07-25 14:02:46.770151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d5d40 (9): Bad file descriptor 00:19:57.991 [2024-07-25 14:02:46.770168] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:57.991 [2024-07-25 14:02:46.770429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:57.991 [2024-07-25 14:02:46.770443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:57.991 [2024-07-25 14:02:46.770468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:57.991 [2024-07-25 14:02:46.770481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:58.927 [2024-07-25 14:02:47.770734] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.927 [2024-07-25 14:02:47.770803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d5d40 with addr=10.0.0.2, port=4420 00:19:58.927 [2024-07-25 14:02:47.770819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d5d40 is same with the state(5) to be set 00:19:58.927 [2024-07-25 14:02:47.770845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d5d40 (9): Bad file descriptor 00:19:58.927 [2024-07-25 14:02:47.770865] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:58.927 [2024-07-25 14:02:47.770875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:58.927 [2024-07-25 14:02:47.770887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:58.927 [2024-07-25 14:02:47.770914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.927 [2024-07-25 14:02:47.770925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:59.863 [2024-07-25 14:02:48.771057] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:59.863 [2024-07-25 14:02:48.771132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d5d40 with addr=10.0.0.2, port=4420 00:19:59.863 [2024-07-25 14:02:48.771149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d5d40 is same with the state(5) to be set 00:19:59.863 [2024-07-25 14:02:48.771173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d5d40 (9): Bad file descriptor 00:19:59.863 [2024-07-25 14:02:48.771192] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:59.863 [2024-07-25 14:02:48.771202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:59.863 [2024-07-25 14:02:48.771223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:59.863 [2024-07-25 14:02:48.771250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:59.863 [2024-07-25 14:02:48.771262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:00.800 14:02:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:00.800 [2024-07-25 14:02:49.774715] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.800 [2024-07-25 14:02:49.774779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d5d40 with addr=10.0.0.2, port=4420 00:20:00.800 [2024-07-25 14:02:49.774795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d5d40 is same with the state(5) to be set 00:20:00.800 [2024-07-25 14:02:49.775255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d5d40 (9): Bad file descriptor 00:20:00.800 [2024-07-25 14:02:49.775525] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:00.800 [2024-07-25 14:02:49.775541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:00.800 [2024-07-25 14:02:49.775669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:00.800 [2024-07-25 14:02:49.779728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:00.800 [2024-07-25 14:02:49.779761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:01.058 [2024-07-25 14:02:50.013940] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.058 14:02:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 81756 00:20:01.992 [2024-07-25 14:02:50.818029] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
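The recovery above lines up with host/timeout.sh re-adding the TCP listener: once nvmf_subsystem_add_listener returns and the target logs that it is listening on 10.0.0.2 port 4420, the host's next reconnect attempt succeeds and the controller reset completes. A minimal sketch of that listener toggle, reusing the rpc.py invocations visible in the trace (RPC path, NQN and address are taken from the log; the surrounding shell is illustrative, not the test script itself):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Drop the listener: queued host I/O is aborted (ABORTED - SQ DELETION) and reconnect
  # attempts fail with errno 111 (connection refused), as seen earlier in this log.
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  # Restore the listener: the next reconnect attempt succeeds and the reset completes.
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420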
00:20:07.281
00:20:07.281                                                           Latency(us)
00:20:07.281 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s      Average        min         max
00:20:07.281 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:07.281    Verification LBA range: start 0x0 length 0x4000
00:20:07.281    NVMe0n1                   :      10.01    5407.24      21.12    3691.75     0.00     14040.56     677.70  3035150.89
00:20:07.281 ===================================================================================================================
00:20:07.281 Total                        :               5407.24      21.12    3691.75     0.00     14040.56       0.00  3035150.89
00:20:07.281 0
00:20:07.281 14:02:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81633
00:20:07.281 14:02:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81633 ']'
00:20:07.281 14:02:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81633
00:20:07.281 14:02:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname
00:20:07.281 14:02:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:07.281 14:02:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81633
00:20:07.281 killing process with pid 81633
Received shutdown signal, test time was about 10.000000 seconds
00:20:07.281
00:20:07.281                                                           Latency(us)
00:20:07.281 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s      Average        min         max
00:20:07.281 ===================================================================================================================
00:20:07.281 Total                        :                  0.00       0.00       0.00     0.00         0.00       0.00        0.00
00:20:07.281 14:02:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:20:07.281 14:02:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:20:07.281 14:02:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81633'
00:20:07.281 14:02:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81633
00:20:07.281 14:02:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81633
00:20:07.281 14:02:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=81876
00:20:07.281 14:02:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:20:07.281 14:02:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 81876 /var/tmp/bdevperf.sock
00:20:07.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:07.281 14:02:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81876 ']'
00:20:07.281 14:02:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:07.281 14:02:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:07.281 14:02:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
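The Latency(us) summary above for the run that just finished can be sanity-checked with Little's law: the Job line shows a queue depth of 128, and commands complete at roughly IOPS + Fail/s, about 5407 + 3692 = 9099 per second, so the expected average per-command latency is about 128 / 9099 s, roughly 14.1 ms, within about 0.2% of the reported 14040.56 us. This assumes the Average column is microseconds per command and that failed (aborted) commands count toward the completion rate; a one-liner to reproduce the arithmetic:

  awk 'BEGIN { qd = 128; iops = 5407.24; fails = 3691.75
               printf "predicted average latency = %.1f us (reported 14040.56 us)\n", qd * 1e6 / (iops + fails) }'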
00:20:07.281 14:02:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:07.281 14:02:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:07.281 [2024-07-25 14:02:55.932384] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:20:07.281 [2024-07-25 14:02:55.932691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81876 ] 00:20:07.281 [2024-07-25 14:02:56.066228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.281 [2024-07-25 14:02:56.166956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.281 [2024-07-25 14:02:56.220521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:08.216 14:02:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:08.216 14:02:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:20:08.216 14:02:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=81892 00:20:08.216 14:02:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81876 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:20:08.217 14:02:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:20:08.217 14:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:08.475 NVMe0n1 00:20:08.476 14:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:08.476 14:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=81928 00:20:08.476 14:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:20:08.734 Running I/O for 10 seconds... 
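The core of the bdevperf restart traced above is four commands; collected here for readability, copied from the trace, with brief notes on the options (the notes are inferred from the option names and common SPDK usage, not taken from the log itself):

  # Start bdevperf idle (-z: wait for an RPC before running), queue depth 128, 4 KiB random reads for 10 s.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  # Apply the NVMe bdev options used by the test (flags copied verbatim from the trace).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  # Attach the remote controller; retry a lost connection every 2 s and give up after 5 s.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # Kick off the I/O run.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests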
00:20:09.680 14:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:09.942 [2024-07-25 14:02:58.765655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1877790 is same with the state(5) to be set
[... the same "recv state of tqpair=0x1877790 is same with the state(5) to be set" error repeats continuously between 14:02:58.765655 and 14:02:58.767109 ...]
00:20:09.944 [2024-07-25 14:02:58.770396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.944 [2024-07-25 14:02:58.770445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.944 [2024-07-25 14:02:58.770471] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.944 [2024-07-25 14:02:58.770482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.944 [2024-07-25 14:02:58.770494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:127832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.944 [2024-07-25 14:02:58.770503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.944 [2024-07-25 14:02:58.770514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.944 [2024-07-25 14:02:58.770523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.944 [2024-07-25 14:02:58.770534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.944 [2024-07-25 14:02:58.770543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.944 [2024-07-25 14:02:58.770554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:35008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.944 [2024-07-25 14:02:58.770563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.944 [2024-07-25 14:02:58.770815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.944 [2024-07-25 14:02:58.770828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.944 [2024-07-25 14:02:58.770839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.944 [2024-07-25 14:02:58.770848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.944 [2024-07-25 14:02:58.770861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:52656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.944 [2024-07-25 14:02:58.770870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.944 [2024-07-25 14:02:58.770881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.944 [2024-07-25 14:02:58.770889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.944 [2024-07-25 14:02:58.770900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.944 [2024-07-25 14:02:58.770909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.944 [2024-07-25 14:02:58.771151] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.944 [2024-07-25 14:02:58.771164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.944 [2024-07-25 14:02:58.771175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:58400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.944 [2024-07-25 14:02:58.771184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.944 [2024-07-25 14:02:58.771195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.944 [2024-07-25 14:02:58.771204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.944 [2024-07-25 14:02:58.771215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:56416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.944 [2024-07-25 14:02:58.771224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.944 [2024-07-25 14:02:58.771630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.944 [2024-07-25 14:02:58.771654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.944 [2024-07-25 14:02:58.771667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:70528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.944 [2024-07-25 14:02:58.771676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.944 [2024-07-25 14:02:58.771687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.944 [2024-07-25 14:02:58.771696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.944 [2024-07-25 14:02:58.771707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.944 [2024-07-25 14:02:58.771716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.944 [2024-07-25 14:02:58.771726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.771735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.771746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.771754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.771765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 
nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.771774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.771785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:115464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.771793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.771804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.771813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.771823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.771832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.771843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.771852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.771863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.771871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.771882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:85800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.771891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.771903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.771912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.771923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.771932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.771943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.771952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.771962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128600 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.771972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.771982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.771991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:116912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:52720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:09.945 [2024-07-25 14:02:58.772182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:115856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:121968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:33304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772393] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:111912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.945 [2024-07-25 14:02:58.772464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:55992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.945 [2024-07-25 14:02:58.772473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:118808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:123328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:55832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:118760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:125448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:56816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.772982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.772992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.773003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:66552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.773012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.773024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.773033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.773044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.773053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.773064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.773073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.773084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.773093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.773103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.773113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.773124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.773133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.773144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.773152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.773163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.773178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.773190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:56816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.773200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 [2024-07-25 14:02:58.773211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:89120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.773220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.946 
[2024-07-25 14:02:58.773232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.946 [2024-07-25 14:02:58.773241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.773253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.773262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.773273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.773282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.774095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.774226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.774788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.775128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.775474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.775774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.776198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.776555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.776897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:127344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:50456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:49848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:53784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:123256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.777987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.777997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.778006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.778017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.778027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.947 [2024-07-25 14:02:58.778038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101256 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.947 [2024-07-25 14:02:58.778047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.948 [2024-07-25 14:02:58.778058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.948 [2024-07-25 14:02:58.778068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.948 [2024-07-25 14:02:58.778079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:117056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.948 [2024-07-25 14:02:58.778088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.948 [2024-07-25 14:02:58.778098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.948 [2024-07-25 14:02:58.778108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.948 [2024-07-25 14:02:58.778120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.948 [2024-07-25 14:02:58.778129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.948 [2024-07-25 14:02:58.778140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10706a0 is same with the state(5) to be set 00:20:09.948 [2024-07-25 14:02:58.778153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.948 [2024-07-25 14:02:58.778160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.948 [2024-07-25 14:02:58.778169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26936 len:8 PRP1 0x0 PRP2 0x0 00:20:09.948 [2024-07-25 14:02:58.778178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.948 [2024-07-25 14:02:58.778232] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10706a0 was disconnected and freed. reset controller. 
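Every queued READ in the run above was completed by the host with ABORTED - SQ DELETION status once the I/O qpair (0x10706a0) to 10.0.0.2:4420 was torn down and freed. When triaging a log like this, a quick way to gauge how many I/Os were caught by the teardown is to count those completions; a minimal sketch, assuming the log has been saved to a file (the file name below is a placeholder, not something this job wrote):

  # Count I/Os completed with ABORTED - SQ DELETION in a saved autotest log.
  # "nvmf-timeout-build.log" is a placeholder name for this sketch.
  grep -c 'ABORTED - SQ DELETION' nvmf-timeout-build.log
  # Optional: break the aborted READs down by submission queue id.
  grep -o 'READ sqid:[0-9]*' nvmf-timeout-build.log | sort | uniq -c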
00:20:09.948 [2024-07-25 14:02:58.778343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.948 [2024-07-25 14:02:58.778361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.948 [2024-07-25 14:02:58.778372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.948 [2024-07-25 14:02:58.778382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.948 [2024-07-25 14:02:58.778391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.948 [2024-07-25 14:02:58.778400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.948 [2024-07-25 14:02:58.778410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.948 [2024-07-25 14:02:58.778419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.948 [2024-07-25 14:02:58.778428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101fc00 is same with the state(5) to be set 00:20:09.948 [2024-07-25 14:02:58.778676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:09.948 [2024-07-25 14:02:58.778700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101fc00 (9): Bad file descriptor 00:20:09.948 [2024-07-25 14:02:58.778802] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.948 [2024-07-25 14:02:58.778824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101fc00 with addr=10.0.0.2, port=4420 00:20:09.948 [2024-07-25 14:02:58.778834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101fc00 is same with the state(5) to be set 00:20:09.948 [2024-07-25 14:02:58.778852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101fc00 (9): Bad file descriptor 00:20:09.948 [2024-07-25 14:02:58.778869] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:09.948 [2024-07-25 14:02:58.778878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:09.948 [2024-07-25 14:02:58.778889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:09.948 [2024-07-25 14:02:58.778910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
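The connect() failures above return errno 111 (ECONNREFUSED): nothing is accepting TCP connections on 10.0.0.2 port 4420 any more, so each reconnect attempt fails immediately and bdev_nvme schedules the next one according to the controller's reconnect settings; the two-second spacing of the retries that follow is consistent with a reconnect delay of two seconds. As a hedged illustration of how a host-side controller of this kind is typically attached with explicit reconnect behaviour (the RPC socket path and the timeout values below are placeholders, not the exact parameters this run used):

  # Illustrative attach of a host-side NVMe/TCP bdev controller with explicit
  # reconnect behaviour; socket path and timeout values are placeholders.
  scripts/rpc.py -s /tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 5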
00:20:09.948 [2024-07-25 14:02:58.778920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:09.948 14:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 81928 00:20:11.850 [2024-07-25 14:03:00.779272] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.850 [2024-07-25 14:03:00.779355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101fc00 with addr=10.0.0.2, port=4420 00:20:11.850 [2024-07-25 14:03:00.779373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101fc00 is same with the state(5) to be set 00:20:11.850 [2024-07-25 14:03:00.779400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101fc00 (9): Bad file descriptor 00:20:11.850 [2024-07-25 14:03:00.779434] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:11.850 [2024-07-25 14:03:00.779445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:11.850 [2024-07-25 14:03:00.779457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:11.850 [2024-07-25 14:03:00.779486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.850 [2024-07-25 14:03:00.779497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:13.751 [2024-07-25 14:03:02.779708] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:13.751 [2024-07-25 14:03:02.779777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101fc00 with addr=10.0.0.2, port=4420 00:20:13.751 [2024-07-25 14:03:02.779795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101fc00 is same with the state(5) to be set 00:20:13.751 [2024-07-25 14:03:02.779821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101fc00 (9): Bad file descriptor 00:20:13.751 [2024-07-25 14:03:02.779841] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:13.751 [2024-07-25 14:03:02.779851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:13.751 [2024-07-25 14:03:02.779862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:13.751 [2024-07-25 14:03:02.779889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:13.751 [2024-07-25 14:03:02.779901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:16.280 [2024-07-25 14:03:04.780069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
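After the retry window expires, the script (host/timeout.sh, just below) dumps the bdevperf trace, counts how many "reconnect delay bdev controller NVMe0" probes fired, and treats two or fewer as a failure; here the grep reports 3, so the check passes. A minimal sketch of that style of assertion, with a hypothetical trace path standing in for the real one:

  # Sketch of the reconnect-delay assertion shown just below in the log;
  # the trace file path is a placeholder.
  trace_file=/tmp/nvmf_timeout_trace.txt
  delay_count=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace_file")
  # Expect at least three delayed reconnect attempts to have been traced.
  if (( delay_count <= 2 )); then
      echo "expected more than 2 delayed reconnects, saw $delay_count" >&2
      exit 1
  fi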
00:20:16.281 [2024-07-25 14:03:04.780148] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:16.281 [2024-07-25 14:03:04.780162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:16.281 [2024-07-25 14:03:04.780173] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:20:16.281 [2024-07-25 14:03:04.780206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:16.848 00:20:16.848 Latency(us) 00:20:16.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.848 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:16.848 NVMe0n1 : 8.16 2071.16 8.09 15.70 0.00 61397.58 8221.79 7046430.72 00:20:16.848 =================================================================================================================== 00:20:16.848 Total : 2071.16 8.09 15.70 0.00 61397.58 8221.79 7046430.72 00:20:16.848 0 00:20:16.848 14:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:16.848 Attaching 5 probes... 00:20:16.848 1332.065326: reset bdev controller NVMe0 00:20:16.848 1332.135416: reconnect bdev controller NVMe0 00:20:16.848 3332.516445: reconnect delay bdev controller NVMe0 00:20:16.848 3332.545117: reconnect bdev controller NVMe0 00:20:16.848 5332.971833: reconnect delay bdev controller NVMe0 00:20:16.848 5332.998283: reconnect bdev controller NVMe0 00:20:16.848 7333.433308: reconnect delay bdev controller NVMe0 00:20:16.848 7333.463017: reconnect bdev controller NVMe0 00:20:16.848 14:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:16.848 14:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:16.848 14:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 81892 00:20:16.848 14:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:16.848 14:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 81876 00:20:16.848 14:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81876 ']' 00:20:16.848 14:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81876 00:20:16.848 14:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:20:16.848 14:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:16.848 14:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81876 00:20:16.848 killing process with pid 81876 00:20:16.848 Received shutdown signal, test time was about 8.214420 seconds 00:20:16.848 00:20:16.848 Latency(us) 00:20:16.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.848 =================================================================================================================== 00:20:16.848 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:16.848 14:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:16.848 14:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:16.848 14:03:05 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81876' 00:20:16.848 14:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81876 00:20:16.848 14:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81876 00:20:17.107 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:17.365 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:17.365 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:17.365 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:17.365 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:20:17.365 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:17.365 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:20:17.365 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:17.365 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:17.365 rmmod nvme_tcp 00:20:17.365 rmmod nvme_fabrics 00:20:17.365 rmmod nvme_keyring 00:20:17.366 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:17.366 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:20:17.366 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:20:17.366 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 81446 ']' 00:20:17.366 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 81446 00:20:17.366 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81446 ']' 00:20:17.366 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81446 00:20:17.366 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:20:17.366 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:17.366 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81446 00:20:17.625 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:17.625 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:17.625 killing process with pid 81446 00:20:17.625 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81446' 00:20:17.625 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81446 00:20:17.625 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81446 00:20:17.884 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:17.884 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:17.884 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:17.884 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:17.884 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:17.884 14:03:06 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.884 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:17.884 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.884 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:17.884 00:20:17.884 real 0m46.521s 00:20:17.884 user 2m16.502s 00:20:17.884 sys 0m5.752s 00:20:17.884 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:17.884 ************************************ 00:20:17.884 END TEST nvmf_timeout 00:20:17.884 ************************************ 00:20:17.884 14:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:17.884 14:03:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:20:17.884 14:03:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:17.884 00:20:17.884 real 5m13.720s 00:20:17.884 user 13m43.099s 00:20:17.884 sys 1m10.693s 00:20:17.884 14:03:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:17.884 ************************************ 00:20:17.884 END TEST nvmf_host 00:20:17.884 14:03:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.884 ************************************ 00:20:17.884 00:20:17.884 real 12m32.638s 00:20:17.884 user 30m37.091s 00:20:17.884 sys 3m5.619s 00:20:17.884 14:03:06 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:17.884 ************************************ 00:20:17.884 14:03:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:17.884 END TEST nvmf_tcp 00:20:17.884 ************************************ 00:20:17.884 14:03:06 -- spdk/autotest.sh@292 -- # [[ 1 -eq 0 ]] 00:20:17.884 14:03:06 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:17.884 14:03:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:17.884 14:03:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:17.884 14:03:06 -- common/autotest_common.sh@10 -- # set +x 00:20:17.884 ************************************ 00:20:17.884 START TEST nvmf_dif 00:20:17.884 ************************************ 00:20:17.884 14:03:06 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:17.884 * Looking for test storage... 
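dif.sh begins by sourcing test/nvmf/common.sh, which, as the next lines show, derives a host identity from nvme-cli before any target setup happens. Roughly, and with the string handling below being an assumption rather than a copy of common.sh:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # -> nqn.2014-08.org.nvmexpress:uuid:<random uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the trailing uuid
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")   # reused by later nvme connect calls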
00:20:17.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:17.884 14:03:06 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:17.884 14:03:06 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:17.884 14:03:06 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.884 14:03:06 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.884 14:03:06 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.884 14:03:06 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.884 14:03:06 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.884 14:03:06 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.884 14:03:06 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.884 14:03:06 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.884 14:03:06 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.884 14:03:06 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.884 14:03:06 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:20:17.884 14:03:06 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:20:17.884 14:03:06 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.884 14:03:06 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.884 14:03:06 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:17.884 14:03:06 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.884 14:03:06 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:17.885 14:03:06 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.885 14:03:06 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.885 14:03:06 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.885 14:03:06 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.885 14:03:06 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.885 14:03:06 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.885 14:03:06 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:17.885 14:03:06 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.885 14:03:06 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:20:17.885 14:03:06 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:17.885 14:03:06 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:17.885 14:03:06 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.885 14:03:06 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.885 14:03:06 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.885 14:03:06 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:18.143 14:03:06 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:18.143 14:03:06 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:18.143 14:03:06 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:18.143 14:03:06 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:18.143 14:03:06 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.143 14:03:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:18.143 14:03:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:18.143 14:03:06 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.144 14:03:06 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:18.144 14:03:06 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:18.144 14:03:06 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:18.144 14:03:06 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:18.144 14:03:06 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:18.144 14:03:06 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:18.144 Cannot find device "nvmf_tgt_br" 00:20:18.144 14:03:06 nvmf_dif -- nvmf/common.sh@155 -- # true 00:20:18.144 14:03:06 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:18.144 Cannot find device "nvmf_tgt_br2" 00:20:18.144 14:03:06 nvmf_dif -- nvmf/common.sh@156 -- # true 00:20:18.144 14:03:06 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:18.144 14:03:06 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:18.144 Cannot find device "nvmf_tgt_br" 00:20:18.144 14:03:06 nvmf_dif -- nvmf/common.sh@158 -- # true 00:20:18.144 14:03:06 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:18.144 Cannot find device "nvmf_tgt_br2" 00:20:18.144 14:03:06 nvmf_dif -- nvmf/common.sh@159 -- # true 00:20:18.144 14:03:06 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:18.144 14:03:07 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:18.144 14:03:07 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:18.144 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:18.144 14:03:07 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:18.144 14:03:07 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:18.144 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:18.144 14:03:07 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:18.144 14:03:07 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:18.144 14:03:07 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:18.144 14:03:07 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:18.144 14:03:07 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:18.144 14:03:07 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:18.144 14:03:07 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:18.144 14:03:07 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:18.144 14:03:07 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:18.144 14:03:07 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:18.144 14:03:07 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:18.403 14:03:07 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:18.403 14:03:07 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:18.403 14:03:07 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:18.403 14:03:07 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:18.403 14:03:07 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:18.403 14:03:07 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:18.403 
14:03:07 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:18.403 14:03:07 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:18.403 14:03:07 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:18.403 14:03:07 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:18.403 14:03:07 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:18.403 14:03:07 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:18.403 14:03:07 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:18.403 14:03:07 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:18.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:20:18.403 00:20:18.403 --- 10.0.0.2 ping statistics --- 00:20:18.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.403 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:18.403 14:03:07 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:18.403 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:18.403 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:20:18.403 00:20:18.403 --- 10.0.0.3 ping statistics --- 00:20:18.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.403 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:18.403 14:03:07 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:18.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:18.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:20:18.403 00:20:18.403 --- 10.0.0.1 ping statistics --- 00:20:18.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.403 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:18.403 14:03:07 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.403 14:03:07 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:20:18.403 14:03:07 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:18.403 14:03:07 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:18.661 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:18.661 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:18.661 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:18.661 14:03:07 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.661 14:03:07 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:18.661 14:03:07 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:18.661 14:03:07 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.661 14:03:07 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:18.661 14:03:07 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:18.661 14:03:07 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:18.661 14:03:07 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:18.661 14:03:07 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:18.661 14:03:07 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:18.661 14:03:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:18.661 14:03:07 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec 
nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:18.661 14:03:07 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=82365 00:20:18.661 14:03:07 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 82365 00:20:18.661 14:03:07 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 82365 ']' 00:20:18.661 14:03:07 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.661 14:03:07 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:18.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.661 14:03:07 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.661 14:03:07 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:18.661 14:03:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:18.919 [2024-07-25 14:03:07.741362] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:20:18.920 [2024-07-25 14:03:07.741445] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.920 [2024-07-25 14:03:07.884091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.178 [2024-07-25 14:03:08.006132] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.178 [2024-07-25 14:03:08.006204] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.178 [2024-07-25 14:03:08.006218] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.178 [2024-07-25 14:03:08.006229] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.178 [2024-07-25 14:03:08.006238] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
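For readers skimming the wrapped xtrace above: nvmf_veth_init builds a small veth-plus-bridge topology so that the target, which is then launched inside the nvmf_tgt_ns_spdk namespace (the nvmf_tgt -i 0 -e 0xFFFF invocation that produced the DPDK/EAL and tracepoint notices above), is reachable from the host-side initiator at 10.0.0.2:4420. Condensed to its essentials, leaving out the second target interface (nvmf_tgt_if2 / 10.0.0.3) and all error handling, the setup is:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # same sanity checks as in the trace above
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1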
00:20:19.178 [2024-07-25 14:03:08.006287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.178 [2024-07-25 14:03:08.065423] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:19.178 14:03:08 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:19.178 14:03:08 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:20:19.178 14:03:08 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:19.178 14:03:08 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:19.178 14:03:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:19.178 14:03:08 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.178 14:03:08 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:19.178 14:03:08 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:19.178 14:03:08 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.178 14:03:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:19.178 [2024-07-25 14:03:08.173879] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.178 14:03:08 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.178 14:03:08 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:19.178 14:03:08 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:19.178 14:03:08 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:19.178 14:03:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:19.178 ************************************ 00:20:19.178 START TEST fio_dif_1_default 00:20:19.178 ************************************ 00:20:19.178 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:20:19.178 14:03:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:19.178 14:03:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:19.178 14:03:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:19.178 14:03:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:19.178 14:03:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:19.178 14:03:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:19.178 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.178 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:19.178 bdev_null0 00:20:19.178 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.178 14:03:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:19.178 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.178 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:19.178 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.178 14:03:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:19.178 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.178 14:03:08 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:19.437 [2024-07-25 14:03:08.218020] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.437 { 00:20:19.437 "params": { 00:20:19.437 "name": "Nvme$subsystem", 00:20:19.437 "trtype": "$TEST_TRANSPORT", 00:20:19.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.437 "adrfam": "ipv4", 00:20:19.437 "trsvcid": "$NVMF_PORT", 00:20:19.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.437 "hdgst": ${hdgst:-false}, 00:20:19.437 "ddgst": ${ddgst:-false} 00:20:19.437 }, 00:20:19.437 "method": "bdev_nvme_attach_controller" 00:20:19.437 } 00:20:19.437 EOF 00:20:19.437 )") 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:19.437 "params": { 00:20:19.437 "name": "Nvme0", 00:20:19.437 "trtype": "tcp", 00:20:19.437 "traddr": "10.0.0.2", 00:20:19.437 "adrfam": "ipv4", 00:20:19.437 "trsvcid": "4420", 00:20:19.437 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:19.437 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:19.437 "hdgst": false, 00:20:19.437 "ddgst": false 00:20:19.437 }, 00:20:19.437 "method": "bdev_nvme_attach_controller" 00:20:19.437 }' 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:19.437 14:03:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:19.437 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:19.438 fio-3.35 00:20:19.438 Starting 1 thread 00:20:31.676 00:20:31.676 filename0: (groupid=0, jobs=1): err= 0: pid=82422: Thu Jul 25 14:03:18 2024 00:20:31.676 read: IOPS=8419, BW=32.9MiB/s (34.5MB/s)(329MiB/10001msec) 00:20:31.676 slat (nsec): min=6547, max=61225, avg=8461.07, stdev=2319.88 00:20:31.676 clat (usec): min=164, max=1609, avg=450.38, stdev=83.95 00:20:31.676 lat (usec): min=174, max=1643, avg=458.84, stdev=84.12 00:20:31.676 clat percentiles (usec): 00:20:31.676 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 412], 20.00th=[ 420], 00:20:31.676 | 30.00th=[ 429], 40.00th=[ 433], 50.00th=[ 437], 60.00th=[ 445], 00:20:31.676 | 70.00th=[ 449], 80.00th=[ 461], 90.00th=[ 478], 95.00th=[ 523], 00:20:31.676 | 99.00th=[ 619], 99.50th=[ 1385], 99.90th=[ 1483], 99.95th=[ 1500], 00:20:31.676 | 99.99th=[ 1549] 00:20:31.676 bw ( KiB/s): min=26720, max=35200, per=99.91%, avg=33650.53, stdev=2088.72, samples=19 00:20:31.676 iops : min= 6680, max= 8800, avg=8412.63, stdev=522.18, samples=19 00:20:31.676 lat (usec) : 250=0.01%, 500=93.89%, 750=5.45%, 
1000=0.10% 00:20:31.676 lat (msec) : 2=0.56% 00:20:31.676 cpu : usr=84.82%, sys=13.51%, ctx=25, majf=0, minf=0 00:20:31.676 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:31.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.676 issued rwts: total=84205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:31.676 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:31.676 00:20:31.676 Run status group 0 (all jobs): 00:20:31.676 READ: bw=32.9MiB/s (34.5MB/s), 32.9MiB/s-32.9MiB/s (34.5MB/s-34.5MB/s), io=329MiB (345MB), run=10001-10001msec 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.676 00:20:31.676 real 0m11.025s 00:20:31.676 user 0m9.129s 00:20:31.676 sys 0m1.627s 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:31.676 ************************************ 00:20:31.676 END TEST fio_dif_1_default 00:20:31.676 ************************************ 00:20:31.676 14:03:19 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:31.676 14:03:19 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:31.676 14:03:19 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:31.676 14:03:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:31.676 ************************************ 00:20:31.676 START TEST fio_dif_1_multi_subsystems 00:20:31.676 ************************************ 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 
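Both the single-subsystem run above and the two-subsystem run starting here build the target side from null bdevs carrying 16 bytes of metadata per 512-byte block, with the transport created earlier using --dif-insert-or-strip so the target inserts and strips that protection information on the wire. Pulled out of the wrapped xtrace, the per-subsystem RPC sequence is as follows (rpc_cmd is a thin wrapper around scripts/rpc.py, so the direct form is shown; socket options omitted):

    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # 64 MB null bdev, DIF type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The multi-subsystem test simply repeats the same sequence with bdev_null1 / cnode1.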
00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:31.676 bdev_null0 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:31.676 [2024-07-25 14:03:19.293009] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:31.676 bdev_null1 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.676 14:03:19 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:31.676 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.677 { 00:20:31.677 "params": { 00:20:31.677 "name": "Nvme$subsystem", 00:20:31.677 "trtype": "$TEST_TRANSPORT", 00:20:31.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.677 "adrfam": "ipv4", 00:20:31.677 "trsvcid": "$NVMF_PORT", 00:20:31.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.677 "hdgst": ${hdgst:-false}, 00:20:31.677 "ddgst": ${ddgst:-false} 00:20:31.677 }, 00:20:31.677 "method": "bdev_nvme_attach_controller" 00:20:31.677 } 00:20:31.677 EOF 00:20:31.677 )") 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:20:31.677 14:03:19 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.677 { 00:20:31.677 "params": { 00:20:31.677 "name": "Nvme$subsystem", 00:20:31.677 "trtype": "$TEST_TRANSPORT", 00:20:31.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.677 "adrfam": "ipv4", 00:20:31.677 "trsvcid": "$NVMF_PORT", 00:20:31.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.677 "hdgst": ${hdgst:-false}, 00:20:31.677 "ddgst": ${ddgst:-false} 00:20:31.677 }, 00:20:31.677 "method": "bdev_nvme_attach_controller" 00:20:31.677 } 00:20:31.677 EOF 00:20:31.677 )") 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
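The JSON printed just below is only half of what fio consumes here: it is the spdk_bdev ioengine's bdev configuration (two bdev_nvme_attach_controller entries, Nvme0 and Nvme1), streamed in on /dev/fd/62. The job file itself goes in on /dev/fd/61 and therefore never appears in the log; based on the filename0/filename1 banners fio prints afterwards, it is roughly equivalent to the sketch below (file names and the exact option set are assumptions, not copied from dif.sh):

    # write an equivalent job file and run it against a saved copy of the JSON config
    cat > dif.job <<'EOF'
    [global]
    thread=1             # the SPDK fio plugin requires fio's thread mode
    rw=randread
    bs=4k
    iodepth=4
    [filename0]
    filename=Nvme0n1     # namespace 1 of the controller named Nvme0 in the JSON config
    [filename1]
    filename=Nvme1n1
    EOF
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf ./nvme_config.json dif.job
    # ./nvme_config.json: a full SPDK JSON config whose bdev section holds the two
    # attach_controller entries printed below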
00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:31.677 "params": { 00:20:31.677 "name": "Nvme0", 00:20:31.677 "trtype": "tcp", 00:20:31.677 "traddr": "10.0.0.2", 00:20:31.677 "adrfam": "ipv4", 00:20:31.677 "trsvcid": "4420", 00:20:31.677 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:31.677 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:31.677 "hdgst": false, 00:20:31.677 "ddgst": false 00:20:31.677 }, 00:20:31.677 "method": "bdev_nvme_attach_controller" 00:20:31.677 },{ 00:20:31.677 "params": { 00:20:31.677 "name": "Nvme1", 00:20:31.677 "trtype": "tcp", 00:20:31.677 "traddr": "10.0.0.2", 00:20:31.677 "adrfam": "ipv4", 00:20:31.677 "trsvcid": "4420", 00:20:31.677 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.677 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:31.677 "hdgst": false, 00:20:31.677 "ddgst": false 00:20:31.677 }, 00:20:31.677 "method": "bdev_nvme_attach_controller" 00:20:31.677 }' 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:31.677 14:03:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:31.677 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:31.677 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:31.677 fio-3.35 00:20:31.677 Starting 2 threads 00:20:41.667 00:20:41.667 filename0: (groupid=0, jobs=1): err= 0: pid=82583: Thu Jul 25 14:03:30 2024 00:20:41.667 read: IOPS=4817, BW=18.8MiB/s (19.7MB/s)(188MiB/10001msec) 00:20:41.667 slat (nsec): min=7076, max=65098, avg=13117.96, stdev=3367.56 00:20:41.667 clat (usec): min=641, max=2777, avg=795.05, stdev=42.66 00:20:41.667 lat (usec): min=653, max=2813, avg=808.17, stdev=43.69 00:20:41.667 clat percentiles (usec): 00:20:41.667 | 1.00th=[ 701], 5.00th=[ 717], 10.00th=[ 742], 20.00th=[ 766], 00:20:41.667 | 30.00th=[ 783], 40.00th=[ 791], 50.00th=[ 799], 60.00th=[ 807], 00:20:41.667 | 70.00th=[ 816], 80.00th=[ 824], 90.00th=[ 840], 95.00th=[ 857], 00:20:41.667 | 99.00th=[ 881], 99.50th=[ 898], 99.90th=[ 930], 99.95th=[ 947], 00:20:41.667 | 99.99th=[ 996] 00:20:41.667 bw ( KiB/s): min=18944, max=19520, per=50.01%, avg=19272.42, stdev=166.56, samples=19 00:20:41.667 iops : min= 4736, max= 
4880, avg=4818.11, stdev=41.64, samples=19 00:20:41.667 lat (usec) : 750=13.04%, 1000=86.95% 00:20:41.667 lat (msec) : 4=0.01% 00:20:41.667 cpu : usr=89.86%, sys=8.71%, ctx=10, majf=0, minf=9 00:20:41.667 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.667 issued rwts: total=48176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.667 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:41.667 filename1: (groupid=0, jobs=1): err= 0: pid=82584: Thu Jul 25 14:03:30 2024 00:20:41.667 read: IOPS=4817, BW=18.8MiB/s (19.7MB/s)(188MiB/10001msec) 00:20:41.667 slat (nsec): min=6992, max=57958, avg=13325.52, stdev=3497.51 00:20:41.667 clat (usec): min=439, max=2656, avg=793.24, stdev=31.39 00:20:41.667 lat (usec): min=446, max=2687, avg=806.57, stdev=31.86 00:20:41.667 clat percentiles (usec): 00:20:41.667 | 1.00th=[ 742], 5.00th=[ 750], 10.00th=[ 758], 20.00th=[ 775], 00:20:41.667 | 30.00th=[ 783], 40.00th=[ 783], 50.00th=[ 791], 60.00th=[ 799], 00:20:41.667 | 70.00th=[ 807], 80.00th=[ 816], 90.00th=[ 824], 95.00th=[ 840], 00:20:41.667 | 99.00th=[ 865], 99.50th=[ 881], 99.90th=[ 914], 99.95th=[ 947], 00:20:41.667 | 99.99th=[ 996] 00:20:41.667 bw ( KiB/s): min=18944, max=19520, per=50.01%, avg=19274.11, stdev=165.61, samples=19 00:20:41.667 iops : min= 4736, max= 4880, avg=4818.53, stdev=41.40, samples=19 00:20:41.667 lat (usec) : 500=0.01%, 750=3.67%, 1000=96.32% 00:20:41.667 lat (msec) : 4=0.01% 00:20:41.667 cpu : usr=89.68%, sys=8.88%, ctx=12, majf=0, minf=0 00:20:41.667 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.667 issued rwts: total=48180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.667 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:41.667 00:20:41.667 Run status group 0 (all jobs): 00:20:41.667 READ: bw=37.6MiB/s (39.5MB/s), 18.8MiB/s-18.8MiB/s (19.7MB/s-19.7MB/s), io=376MiB (395MB), run=10001-10001msec 00:20:41.667 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:41.667 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:41.667 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:41.667 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:41.667 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:41.667 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:41.667 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.667 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:41.667 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.667 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:41.667 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.667 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:20:41.667 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.667 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:41.667 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:41.667 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:41.667 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:41.667 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.667 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:41.667 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.668 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:41.668 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.668 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:41.668 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.668 00:20:41.668 real 0m11.136s 00:20:41.668 user 0m18.764s 00:20:41.668 sys 0m2.066s 00:20:41.668 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:41.668 ************************************ 00:20:41.668 14:03:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:41.668 END TEST fio_dif_1_multi_subsystems 00:20:41.668 ************************************ 00:20:41.668 14:03:30 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:41.668 14:03:30 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:41.668 14:03:30 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:41.668 14:03:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:41.668 ************************************ 00:20:41.668 START TEST fio_dif_rand_params 00:20:41.668 ************************************ 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:41.668 14:03:30 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.668 bdev_null0 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.668 [2024-07-25 14:03:30.476475] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.668 { 00:20:41.668 "params": { 00:20:41.668 "name": "Nvme$subsystem", 00:20:41.668 "trtype": "$TEST_TRANSPORT", 00:20:41.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.668 "adrfam": "ipv4", 00:20:41.668 "trsvcid": "$NVMF_PORT", 00:20:41.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.668 "hdgst": ${hdgst:-false}, 00:20:41.668 "ddgst": ${ddgst:-false} 00:20:41.668 }, 00:20:41.668 "method": "bdev_nvme_attach_controller" 00:20:41.668 } 00:20:41.668 EOF 00:20:41.668 )") 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:41.668 "params": { 00:20:41.668 "name": "Nvme0", 00:20:41.668 "trtype": "tcp", 00:20:41.668 "traddr": "10.0.0.2", 00:20:41.668 "adrfam": "ipv4", 00:20:41.668 "trsvcid": "4420", 00:20:41.668 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:41.668 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:41.668 "hdgst": false, 00:20:41.668 "ddgst": false 00:20:41.668 }, 00:20:41.668 "method": "bdev_nvme_attach_controller" 00:20:41.668 }' 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:41.668 14:03:30 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:41.668 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:41.668 ... 00:20:41.668 fio-3.35 00:20:41.668 Starting 3 threads 00:20:48.235 00:20:48.235 filename0: (groupid=0, jobs=1): err= 0: pid=82740: Thu Jul 25 14:03:36 2024 00:20:48.235 read: IOPS=259, BW=32.5MiB/s (34.1MB/s)(163MiB/5008msec) 00:20:48.235 slat (nsec): min=7627, max=38461, avg=15160.12, stdev=4518.56 00:20:48.235 clat (usec): min=11337, max=15160, avg=11505.15, stdev=191.01 00:20:48.235 lat (usec): min=11345, max=15189, avg=11520.31, stdev=191.34 00:20:48.235 clat percentiles (usec): 00:20:48.235 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11469], 20.00th=[11469], 00:20:48.235 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:20:48.235 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11600], 95.00th=[11600], 00:20:48.235 | 99.00th=[11863], 99.50th=[11863], 99.90th=[15139], 99.95th=[15139], 00:20:48.235 | 99.99th=[15139] 00:20:48.235 bw ( KiB/s): min=33024, max=33792, per=33.31%, avg=33254.40, stdev=370.98, samples=10 00:20:48.235 iops : min= 258, max= 264, avg=259.80, stdev= 2.90, samples=10 00:20:48.235 lat (msec) : 20=100.00% 00:20:48.235 cpu : usr=91.25%, sys=8.03%, ctx=11, majf=0, minf=9 00:20:48.235 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:48.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.235 issued rwts: total=1302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.235 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:48.235 filename0: (groupid=0, jobs=1): err= 0: pid=82741: Thu Jul 25 14:03:36 2024 00:20:48.235 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(163MiB/5006msec) 00:20:48.235 slat (nsec): min=7710, max=47337, avg=15966.24, stdev=4209.86 00:20:48.235 clat (usec): min=11339, max=13083, avg=11498.48, stdev=104.45 00:20:48.235 lat (usec): min=11349, max=13109, avg=11514.45, stdev=104.71 00:20:48.235 clat percentiles (usec): 00:20:48.235 | 1.00th=[11338], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:20:48.235 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:20:48.235 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11600], 95.00th=[11600], 00:20:48.235 | 99.00th=[11863], 99.50th=[11863], 99.90th=[13042], 99.95th=[13042], 00:20:48.235 | 99.99th=[13042] 00:20:48.235 bw ( KiB/s): min=33024, max=33792, per=33.31%, avg=33254.40, stdev=370.98, samples=10 00:20:48.235 iops : min= 258, max= 264, avg=259.80, stdev= 2.90, samples=10 00:20:48.235 lat (msec) : 20=100.00% 00:20:48.235 cpu : usr=90.73%, sys=8.67%, ctx=8, majf=0, minf=9 00:20:48.235 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:48.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.235 issued rwts: total=1302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.235 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:48.235 filename0: (groupid=0, jobs=1): err= 0: pid=82742: Thu Jul 25 14:03:36 2024 00:20:48.235 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(163MiB/5005msec) 00:20:48.235 slat (nsec): min=7722, max=44614, avg=15963.64, stdev=4111.88 00:20:48.235 clat (usec): min=11378, max=12130, avg=11495.39, 
stdev=74.08 00:20:48.235 lat (usec): min=11391, max=12155, avg=11511.35, stdev=74.42 00:20:48.235 clat percentiles (usec): 00:20:48.235 | 1.00th=[11338], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:20:48.235 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:20:48.235 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11600], 95.00th=[11600], 00:20:48.235 | 99.00th=[11863], 99.50th=[11863], 99.90th=[12125], 99.95th=[12125], 00:20:48.235 | 99.99th=[12125] 00:20:48.235 bw ( KiB/s): min=33024, max=33792, per=33.31%, avg=33254.40, stdev=370.98, samples=10 00:20:48.235 iops : min= 258, max= 264, avg=259.80, stdev= 2.90, samples=10 00:20:48.235 lat (msec) : 20=100.00% 00:20:48.235 cpu : usr=91.13%, sys=8.25%, ctx=7, majf=0, minf=9 00:20:48.235 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:48.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.235 issued rwts: total=1302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.235 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:48.235 00:20:48.235 Run status group 0 (all jobs): 00:20:48.235 READ: bw=97.5MiB/s (102MB/s), 32.5MiB/s-32.5MiB/s (34.1MB/s-34.1MB/s), io=488MiB (512MB), run=5005-5008msec 00:20:48.235 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:48.235 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:48.235 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:48.235 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:48.235 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:48.235 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:48.235 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.235 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.235 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.235 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:48.235 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.235 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.235 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.235 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:48.235 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:48.235 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:48.235 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:48.235 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:48.235 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:48.235 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:48.236 
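The pass summarized above (Run status group 0) drove fio through the SPDK bdev plugin, feeding the generated JSON and job file over /dev/fd/62 and /dev/fd/61. A rough standalone equivalent, assuming the bdev_nvme_attach_controller JSON printed earlier is saved as /tmp/bdev.json and that controller Nvme0 exposes its namespace as bdev Nvme0n1 (both file names and the bdev name are hypothetical, not taken from this log):

cat > /tmp/filename0.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=/tmp/bdev.json
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5

[filename0]
filename=Nvme0n1
EOF
# preload the plugin so fio can resolve the spdk_bdev ioengine
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio /tmp/filename0.fio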
14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.236 bdev_null0 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.236 [2024-07-25 14:03:36.459957] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.236 bdev_null1 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 
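Each subsystem of this pass is built by the same four rpc_cmd calls traced here: a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 2, then an NVMe-oF subsystem, a namespace backed by that bdev, and a TCP listener on 10.0.0.2:4420. Issued directly with scripts/rpc.py (rpc_cmd is the harness wrapper around it; the default RPC socket is assumed), the sequence for subsystems 0, 1 and 2 is roughly:

for i in 0 1 2; do
    # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2
    scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done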
00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.236 bdev_null2 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:48.236 { 00:20:48.236 "params": { 00:20:48.236 
"name": "Nvme$subsystem", 00:20:48.236 "trtype": "$TEST_TRANSPORT", 00:20:48.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.236 "adrfam": "ipv4", 00:20:48.236 "trsvcid": "$NVMF_PORT", 00:20:48.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.236 "hdgst": ${hdgst:-false}, 00:20:48.236 "ddgst": ${ddgst:-false} 00:20:48.236 }, 00:20:48.236 "method": "bdev_nvme_attach_controller" 00:20:48.236 } 00:20:48.236 EOF 00:20:48.236 )") 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:48.236 { 00:20:48.236 "params": { 00:20:48.236 "name": "Nvme$subsystem", 00:20:48.236 "trtype": "$TEST_TRANSPORT", 00:20:48.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.236 "adrfam": "ipv4", 00:20:48.236 "trsvcid": "$NVMF_PORT", 00:20:48.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.236 "hdgst": ${hdgst:-false}, 00:20:48.236 "ddgst": ${ddgst:-false} 00:20:48.236 }, 00:20:48.236 "method": "bdev_nvme_attach_controller" 00:20:48.236 } 00:20:48.236 EOF 00:20:48.236 )") 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:48.236 14:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:48.236 { 00:20:48.236 "params": { 00:20:48.236 "name": "Nvme$subsystem", 00:20:48.236 "trtype": "$TEST_TRANSPORT", 00:20:48.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.236 "adrfam": "ipv4", 00:20:48.236 "trsvcid": "$NVMF_PORT", 00:20:48.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.237 "hdgst": ${hdgst:-false}, 00:20:48.237 "ddgst": ${ddgst:-false} 00:20:48.237 }, 00:20:48.237 "method": "bdev_nvme_attach_controller" 00:20:48.237 } 00:20:48.237 EOF 00:20:48.237 )") 00:20:48.237 14:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:48.237 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:48.237 14:03:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:48.237 14:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:48.237 14:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:48.237 14:03:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:48.237 "params": { 00:20:48.237 "name": "Nvme0", 00:20:48.237 "trtype": "tcp", 00:20:48.237 "traddr": "10.0.0.2", 00:20:48.237 "adrfam": "ipv4", 00:20:48.237 "trsvcid": "4420", 00:20:48.237 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:48.237 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:48.237 "hdgst": false, 00:20:48.237 "ddgst": false 00:20:48.237 }, 00:20:48.237 "method": "bdev_nvme_attach_controller" 00:20:48.237 },{ 00:20:48.237 "params": { 00:20:48.237 "name": "Nvme1", 00:20:48.237 "trtype": "tcp", 00:20:48.237 "traddr": "10.0.0.2", 00:20:48.237 "adrfam": "ipv4", 00:20:48.237 "trsvcid": "4420", 00:20:48.237 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.237 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:48.237 "hdgst": false, 00:20:48.237 "ddgst": false 00:20:48.237 }, 00:20:48.237 "method": "bdev_nvme_attach_controller" 00:20:48.237 },{ 00:20:48.237 "params": { 00:20:48.237 "name": "Nvme2", 00:20:48.237 "trtype": "tcp", 00:20:48.237 "traddr": "10.0.0.2", 00:20:48.237 "adrfam": "ipv4", 00:20:48.237 "trsvcid": "4420", 00:20:48.237 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:48.237 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:48.237 "hdgst": false, 00:20:48.237 "ddgst": false 00:20:48.237 }, 00:20:48.237 "method": "bdev_nvme_attach_controller" 00:20:48.237 }' 00:20:48.237 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:48.237 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:48.237 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:48.237 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:48.237 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:48.237 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:48.237 14:03:36 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:20:48.237 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:48.237 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:48.237 14:03:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:48.237 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:48.237 ... 00:20:48.237 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:48.237 ... 00:20:48.237 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:48.237 ... 00:20:48.237 fio-3.35 00:20:48.237 Starting 24 threads 00:21:00.437 00:21:00.437 filename0: (groupid=0, jobs=1): err= 0: pid=82838: Thu Jul 25 14:03:47 2024 00:21:00.437 read: IOPS=218, BW=874KiB/s (895kB/s)(8764KiB/10029msec) 00:21:00.437 slat (usec): min=5, max=8032, avg=25.77, stdev=256.89 00:21:00.437 clat (msec): min=30, max=143, avg=73.08, stdev=21.51 00:21:00.437 lat (msec): min=30, max=143, avg=73.10, stdev=21.50 00:21:00.437 clat percentiles (msec): 00:21:00.437 | 1.00th=[ 32], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 51], 00:21:00.437 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:21:00.437 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 107], 95.00th=[ 113], 00:21:00.437 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 132], 99.95th=[ 132], 00:21:00.437 | 99.99th=[ 144] 00:21:00.437 bw ( KiB/s): min= 720, max= 1376, per=4.03%, avg=869.75, stdev=133.73, samples=20 00:21:00.437 iops : min= 180, max= 344, avg=217.40, stdev=33.46, samples=20 00:21:00.437 lat (msec) : 50=19.85%, 100=68.46%, 250=11.68% 00:21:00.437 cpu : usr=35.29%, sys=2.17%, ctx=989, majf=0, minf=9 00:21:00.437 IO depths : 1=0.2%, 2=1.4%, 4=4.9%, 8=78.2%, 16=15.4%, 32=0.0%, >=64=0.0% 00:21:00.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.437 complete : 0=0.0%, 4=88.5%, 8=10.5%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.437 issued rwts: total=2191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.437 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.437 filename0: (groupid=0, jobs=1): err= 0: pid=82839: Thu Jul 25 14:03:47 2024 00:21:00.437 read: IOPS=218, BW=874KiB/s (895kB/s)(8776KiB/10037msec) 00:21:00.437 slat (usec): min=7, max=8037, avg=28.02, stdev=296.38 00:21:00.437 clat (msec): min=15, max=143, avg=73.03, stdev=22.32 00:21:00.437 lat (msec): min=16, max=143, avg=73.06, stdev=22.33 00:21:00.437 clat percentiles (msec): 00:21:00.437 | 1.00th=[ 29], 5.00th=[ 32], 10.00th=[ 47], 20.00th=[ 52], 00:21:00.437 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:21:00.437 | 70.00th=[ 84], 80.00th=[ 89], 90.00th=[ 108], 95.00th=[ 110], 00:21:00.437 | 99.00th=[ 121], 99.50th=[ 126], 99.90th=[ 132], 99.95th=[ 132], 00:21:00.437 | 99.99th=[ 144] 00:21:00.437 bw ( KiB/s): min= 672, max= 1632, per=4.04%, avg=871.20, stdev=191.54, samples=20 00:21:00.437 iops : min= 168, max= 408, avg=217.80, stdev=47.89, samples=20 00:21:00.437 lat (msec) : 20=0.64%, 50=17.73%, 100=69.87%, 250=11.76% 00:21:00.437 cpu : usr=31.57%, sys=2.06%, ctx=898, majf=0, minf=9 00:21:00.437 IO depths : 1=0.1%, 2=1.1%, 4=4.5%, 8=78.4%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:00.437 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.437 complete : 0=0.0%, 4=88.7%, 8=10.3%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.437 issued rwts: total=2194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.437 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.437 filename0: (groupid=0, jobs=1): err= 0: pid=82840: Thu Jul 25 14:03:47 2024 00:21:00.437 read: IOPS=234, BW=938KiB/s (961kB/s)(9392KiB/10010msec) 00:21:00.437 slat (usec): min=4, max=9035, avg=44.32, stdev=467.97 00:21:00.437 clat (msec): min=12, max=131, avg=68.02, stdev=21.23 00:21:00.437 lat (msec): min=12, max=131, avg=68.07, stdev=21.23 00:21:00.437 clat percentiles (msec): 00:21:00.437 | 1.00th=[ 26], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 48], 00:21:00.437 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:21:00.437 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 97], 95.00th=[ 108], 00:21:00.437 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:21:00.437 | 99.99th=[ 132] 00:21:00.437 bw ( KiB/s): min= 712, max= 1507, per=4.32%, avg=932.95, stdev=156.23, samples=20 00:21:00.437 iops : min= 178, max= 376, avg=233.20, stdev=38.91, samples=20 00:21:00.437 lat (msec) : 20=0.43%, 50=26.11%, 100=65.16%, 250=8.30% 00:21:00.437 cpu : usr=32.81%, sys=2.09%, ctx=940, majf=0, minf=9 00:21:00.437 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:00.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.437 complete : 0=0.0%, 4=87.1%, 8=12.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.437 issued rwts: total=2348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.437 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.437 filename0: (groupid=0, jobs=1): err= 0: pid=82841: Thu Jul 25 14:03:47 2024 00:21:00.437 read: IOPS=211, BW=846KiB/s (867kB/s)(8496KiB/10038msec) 00:21:00.437 slat (usec): min=7, max=8024, avg=23.32, stdev=195.24 00:21:00.437 clat (msec): min=15, max=154, avg=75.46, stdev=23.17 00:21:00.437 lat (msec): min=15, max=154, avg=75.49, stdev=23.17 00:21:00.437 clat percentiles (msec): 00:21:00.437 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 56], 00:21:00.437 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 81], 00:21:00.437 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 117], 00:21:00.437 | 99.00th=[ 123], 99.50th=[ 129], 99.90th=[ 134], 99.95th=[ 153], 00:21:00.437 | 99.99th=[ 155] 00:21:00.438 bw ( KiB/s): min= 640, max= 1648, per=3.91%, avg=843.20, stdev=204.82, samples=20 00:21:00.438 iops : min= 160, max= 412, avg=210.80, stdev=51.21, samples=20 00:21:00.438 lat (msec) : 20=0.66%, 50=14.41%, 100=68.27%, 250=16.67% 00:21:00.438 cpu : usr=37.12%, sys=2.32%, ctx=1138, majf=0, minf=9 00:21:00.438 IO depths : 1=0.1%, 2=2.3%, 4=9.1%, 8=73.3%, 16=15.2%, 32=0.0%, >=64=0.0% 00:21:00.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.438 complete : 0=0.0%, 4=89.9%, 8=8.1%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.438 issued rwts: total=2124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.438 filename0: (groupid=0, jobs=1): err= 0: pid=82842: Thu Jul 25 14:03:47 2024 00:21:00.438 read: IOPS=236, BW=945KiB/s (968kB/s)(9516KiB/10066msec) 00:21:00.438 slat (usec): min=3, max=8026, avg=20.38, stdev=177.05 00:21:00.438 clat (usec): min=889, max=143962, avg=67482.05, stdev=28061.08 00:21:00.438 lat (usec): min=897, max=143977, avg=67502.43, stdev=28065.27 
00:21:00.438 clat percentiles (usec): 00:21:00.438 | 1.00th=[ 1532], 5.00th=[ 3032], 10.00th=[ 28443], 20.00th=[ 47973], 00:21:00.438 | 30.00th=[ 58983], 40.00th=[ 67634], 50.00th=[ 71828], 60.00th=[ 73925], 00:21:00.438 | 70.00th=[ 81265], 80.00th=[ 85459], 90.00th=[104334], 95.00th=[108528], 00:21:00.438 | 99.00th=[121111], 99.50th=[131597], 99.90th=[135267], 99.95th=[143655], 00:21:00.438 | 99.99th=[143655] 00:21:00.438 bw ( KiB/s): min= 640, max= 2942, per=4.38%, avg=944.30, stdev=479.02, samples=20 00:21:00.438 iops : min= 160, max= 735, avg=236.05, stdev=119.65, samples=20 00:21:00.438 lat (usec) : 1000=0.08% 00:21:00.438 lat (msec) : 2=2.94%, 4=3.53%, 10=0.84%, 50=16.18%, 100=65.28% 00:21:00.438 lat (msec) : 250=11.14% 00:21:00.438 cpu : usr=35.96%, sys=2.24%, ctx=1268, majf=0, minf=0 00:21:00.438 IO depths : 1=0.3%, 2=1.3%, 4=4.0%, 8=78.4%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:00.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.438 complete : 0=0.0%, 4=88.7%, 8=10.5%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.438 issued rwts: total=2379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.438 filename0: (groupid=0, jobs=1): err= 0: pid=82843: Thu Jul 25 14:03:47 2024 00:21:00.438 read: IOPS=231, BW=926KiB/s (948kB/s)(9272KiB/10018msec) 00:21:00.438 slat (usec): min=4, max=8034, avg=30.95, stdev=333.48 00:21:00.438 clat (msec): min=14, max=131, avg=68.99, stdev=21.72 00:21:00.438 lat (msec): min=14, max=131, avg=69.02, stdev=21.71 00:21:00.438 clat percentiles (msec): 00:21:00.438 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 48], 00:21:00.438 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 72], 60.00th=[ 72], 00:21:00.438 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 109], 00:21:00.438 | 99.00th=[ 121], 99.50th=[ 128], 99.90th=[ 130], 99.95th=[ 130], 00:21:00.438 | 99.99th=[ 132] 00:21:00.438 bw ( KiB/s): min= 608, max= 1552, per=4.27%, avg=920.80, stdev=170.99, samples=20 00:21:00.438 iops : min= 152, max= 388, avg=230.20, stdev=42.75, samples=20 00:21:00.438 lat (msec) : 20=0.56%, 50=23.68%, 100=67.00%, 250=8.76% 00:21:00.438 cpu : usr=33.64%, sys=2.09%, ctx=1052, majf=0, minf=9 00:21:00.438 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:00.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.438 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.438 issued rwts: total=2318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.438 filename0: (groupid=0, jobs=1): err= 0: pid=82844: Thu Jul 25 14:03:47 2024 00:21:00.438 read: IOPS=226, BW=905KiB/s (926kB/s)(9072KiB/10028msec) 00:21:00.438 slat (usec): min=6, max=8058, avg=28.14, stdev=274.00 00:21:00.438 clat (msec): min=21, max=142, avg=70.57, stdev=21.61 00:21:00.438 lat (msec): min=21, max=142, avg=70.60, stdev=21.61 00:21:00.438 clat percentiles (msec): 00:21:00.438 | 1.00th=[ 28], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 51], 00:21:00.438 | 30.00th=[ 56], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 75], 00:21:00.438 | 70.00th=[ 81], 80.00th=[ 88], 90.00th=[ 103], 95.00th=[ 111], 00:21:00.438 | 99.00th=[ 120], 99.50th=[ 127], 99.90th=[ 136], 99.95th=[ 142], 00:21:00.438 | 99.99th=[ 142] 00:21:00.438 bw ( KiB/s): min= 664, max= 1408, per=4.17%, avg=900.70, stdev=136.81, samples=20 00:21:00.438 iops : min= 166, max= 352, avg=225.15, stdev=34.20, samples=20 
00:21:00.438 lat (msec) : 50=19.75%, 100=69.31%, 250=10.93% 00:21:00.438 cpu : usr=41.27%, sys=2.68%, ctx=1677, majf=0, minf=9 00:21:00.438 IO depths : 1=0.1%, 2=0.8%, 4=3.4%, 8=80.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:00.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.438 complete : 0=0.0%, 4=87.9%, 8=11.3%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.438 issued rwts: total=2268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.438 filename0: (groupid=0, jobs=1): err= 0: pid=82845: Thu Jul 25 14:03:47 2024 00:21:00.438 read: IOPS=217, BW=872KiB/s (893kB/s)(8728KiB/10011msec) 00:21:00.438 slat (nsec): min=4966, max=91112, avg=17551.27, stdev=7384.29 00:21:00.438 clat (msec): min=20, max=144, avg=73.32, stdev=22.32 00:21:00.438 lat (msec): min=20, max=144, avg=73.34, stdev=22.32 00:21:00.438 clat percentiles (msec): 00:21:00.438 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 48], 20.00th=[ 50], 00:21:00.438 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:21:00.438 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 114], 00:21:00.438 | 99.00th=[ 122], 99.50th=[ 127], 99.90th=[ 144], 99.95th=[ 144], 00:21:00.438 | 99.99th=[ 144] 00:21:00.438 bw ( KiB/s): min= 640, max= 1408, per=4.02%, avg=866.40, stdev=157.52, samples=20 00:21:00.438 iops : min= 160, max= 352, avg=216.60, stdev=39.38, samples=20 00:21:00.438 lat (msec) : 50=20.85%, 100=68.88%, 250=10.27% 00:21:00.438 cpu : usr=31.61%, sys=2.06%, ctx=896, majf=0, minf=9 00:21:00.438 IO depths : 1=0.1%, 2=1.7%, 4=6.9%, 8=76.5%, 16=14.9%, 32=0.0%, >=64=0.0% 00:21:00.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.438 complete : 0=0.0%, 4=88.8%, 8=9.7%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.438 issued rwts: total=2182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.438 filename1: (groupid=0, jobs=1): err= 0: pid=82846: Thu Jul 25 14:03:47 2024 00:21:00.438 read: IOPS=228, BW=914KiB/s (936kB/s)(9148KiB/10013msec) 00:21:00.438 slat (usec): min=6, max=8051, avg=28.65, stdev=302.41 00:21:00.438 clat (msec): min=14, max=132, avg=69.92, stdev=21.85 00:21:00.438 lat (msec): min=14, max=132, avg=69.94, stdev=21.85 00:21:00.438 clat percentiles (msec): 00:21:00.438 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 49], 00:21:00.438 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 73], 00:21:00.438 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 109], 00:21:00.438 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 133], 99.95th=[ 133], 00:21:00.438 | 99.99th=[ 133] 00:21:00.438 bw ( KiB/s): min= 712, max= 1587, per=4.21%, avg=908.55, stdev=174.79, samples=20 00:21:00.438 iops : min= 178, max= 396, avg=227.10, stdev=43.55, samples=20 00:21:00.438 lat (msec) : 20=0.31%, 50=22.30%, 100=67.47%, 250=9.93% 00:21:00.438 cpu : usr=33.39%, sys=2.04%, ctx=995, majf=0, minf=9 00:21:00.438 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:21:00.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.438 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.438 issued rwts: total=2287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.438 filename1: (groupid=0, jobs=1): err= 0: pid=82847: Thu Jul 25 14:03:47 2024 00:21:00.438 read: IOPS=235, BW=943KiB/s 
(965kB/s)(9436KiB/10009msec) 00:21:00.438 slat (usec): min=4, max=8039, avg=29.69, stdev=330.00 00:21:00.438 clat (msec): min=12, max=128, avg=67.76, stdev=21.32 00:21:00.438 lat (msec): min=12, max=128, avg=67.79, stdev=21.31 00:21:00.438 clat percentiles (msec): 00:21:00.438 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 48], 00:21:00.438 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:21:00.438 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 97], 95.00th=[ 108], 00:21:00.438 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 129], 99.95th=[ 129], 00:21:00.438 | 99.99th=[ 129] 00:21:00.438 bw ( KiB/s): min= 712, max= 1592, per=4.34%, avg=937.20, stdev=172.10, samples=20 00:21:00.438 iops : min= 178, max= 398, avg=234.30, stdev=43.02, samples=20 00:21:00.438 lat (msec) : 20=0.51%, 50=26.20%, 100=64.69%, 250=8.61% 00:21:00.438 cpu : usr=33.49%, sys=1.96%, ctx=949, majf=0, minf=9 00:21:00.438 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:00.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.438 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.438 issued rwts: total=2359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.438 filename1: (groupid=0, jobs=1): err= 0: pid=82848: Thu Jul 25 14:03:47 2024 00:21:00.438 read: IOPS=232, BW=931KiB/s (953kB/s)(9312KiB/10002msec) 00:21:00.438 slat (usec): min=5, max=8024, avg=22.58, stdev=204.48 00:21:00.438 clat (msec): min=2, max=131, avg=68.64, stdev=21.03 00:21:00.438 lat (msec): min=2, max=131, avg=68.67, stdev=21.03 00:21:00.438 clat percentiles (msec): 00:21:00.438 | 1.00th=[ 29], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 48], 00:21:00.438 | 30.00th=[ 56], 40.00th=[ 62], 50.00th=[ 69], 60.00th=[ 72], 00:21:00.438 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:21:00.438 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 132], 99.95th=[ 132], 00:21:00.438 | 99.99th=[ 132] 00:21:00.438 bw ( KiB/s): min= 688, max= 1280, per=4.28%, avg=923.79, stdev=110.54, samples=19 00:21:00.438 iops : min= 172, max= 320, avg=230.95, stdev=27.64, samples=19 00:21:00.438 lat (msec) : 4=0.69%, 50=23.58%, 100=67.65%, 250=8.08% 00:21:00.438 cpu : usr=37.88%, sys=2.36%, ctx=1204, majf=0, minf=9 00:21:00.438 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=81.6%, 16=15.4%, 32=0.0%, >=64=0.0% 00:21:00.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.439 complete : 0=0.0%, 4=87.3%, 8=12.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.439 issued rwts: total=2328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.439 filename1: (groupid=0, jobs=1): err= 0: pid=82849: Thu Jul 25 14:03:47 2024 00:21:00.439 read: IOPS=237, BW=948KiB/s (971kB/s)(9484KiB/10002msec) 00:21:00.439 slat (usec): min=7, max=8034, avg=23.08, stdev=190.71 00:21:00.439 clat (usec): min=1629, max=123060, avg=67389.66, stdev=21926.59 00:21:00.439 lat (usec): min=1637, max=123108, avg=67412.74, stdev=21922.44 00:21:00.439 clat percentiles (msec): 00:21:00.439 | 1.00th=[ 4], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 48], 00:21:00.439 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:21:00.439 | 70.00th=[ 78], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:21:00.439 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 124], 00:21:00.439 | 99.99th=[ 124] 00:21:00.439 bw ( KiB/s): min= 712, max= 1472, per=4.34%, 
avg=936.42, stdev=146.56, samples=19 00:21:00.439 iops : min= 178, max= 368, avg=234.11, stdev=36.64, samples=19 00:21:00.439 lat (msec) : 2=0.25%, 4=0.80%, 10=0.13%, 20=0.42%, 50=26.53% 00:21:00.439 lat (msec) : 100=63.56%, 250=8.31% 00:21:00.439 cpu : usr=31.78%, sys=2.08%, ctx=941, majf=0, minf=9 00:21:00.439 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:00.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.439 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.439 issued rwts: total=2371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.439 filename1: (groupid=0, jobs=1): err= 0: pid=82850: Thu Jul 25 14:03:47 2024 00:21:00.439 read: IOPS=225, BW=901KiB/s (923kB/s)(9024KiB/10013msec) 00:21:00.439 slat (usec): min=4, max=8037, avg=32.00, stdev=347.61 00:21:00.439 clat (msec): min=20, max=132, avg=70.84, stdev=21.54 00:21:00.439 lat (msec): min=20, max=132, avg=70.88, stdev=21.55 00:21:00.439 clat percentiles (msec): 00:21:00.439 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 48], 20.00th=[ 49], 00:21:00.439 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 73], 00:21:00.439 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 109], 00:21:00.439 | 99.00th=[ 121], 99.50th=[ 130], 99.90th=[ 132], 99.95th=[ 132], 00:21:00.439 | 99.99th=[ 133] 00:21:00.439 bw ( KiB/s): min= 608, max= 1410, per=4.15%, avg=896.10, stdev=145.30, samples=20 00:21:00.439 iops : min= 152, max= 352, avg=224.00, stdev=36.23, samples=20 00:21:00.439 lat (msec) : 50=22.52%, 100=67.51%, 250=9.97% 00:21:00.439 cpu : usr=32.86%, sys=2.04%, ctx=948, majf=0, minf=9 00:21:00.439 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=79.0%, 16=15.4%, 32=0.0%, >=64=0.0% 00:21:00.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.439 complete : 0=0.0%, 4=88.2%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.439 issued rwts: total=2256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.439 filename1: (groupid=0, jobs=1): err= 0: pid=82851: Thu Jul 25 14:03:47 2024 00:21:00.439 read: IOPS=227, BW=909KiB/s (930kB/s)(9120KiB/10037msec) 00:21:00.439 slat (usec): min=7, max=7044, avg=23.45, stdev=204.90 00:21:00.439 clat (msec): min=13, max=135, avg=70.25, stdev=21.69 00:21:00.439 lat (msec): min=13, max=135, avg=70.28, stdev=21.69 00:21:00.439 clat percentiles (msec): 00:21:00.439 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 45], 20.00th=[ 50], 00:21:00.439 | 30.00th=[ 56], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 75], 00:21:00.439 | 70.00th=[ 80], 80.00th=[ 87], 90.00th=[ 103], 95.00th=[ 110], 00:21:00.439 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 130], 99.95th=[ 131], 00:21:00.439 | 99.99th=[ 136] 00:21:00.439 bw ( KiB/s): min= 664, max= 1608, per=4.20%, avg=905.60, stdev=185.28, samples=20 00:21:00.439 iops : min= 166, max= 402, avg=226.40, stdev=46.32, samples=20 00:21:00.439 lat (msec) : 20=0.70%, 50=20.22%, 100=68.03%, 250=11.05% 00:21:00.439 cpu : usr=43.69%, sys=2.46%, ctx=1371, majf=0, minf=9 00:21:00.439 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.5%, 16=16.1%, 32=0.0%, >=64=0.0% 00:21:00.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.439 complete : 0=0.0%, 4=87.7%, 8=11.9%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.439 issued rwts: total=2280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.439 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:21:00.439 filename1: (groupid=0, jobs=1): err= 0: pid=82852: Thu Jul 25 14:03:47 2024 00:21:00.439 read: IOPS=234, BW=936KiB/s (959kB/s)(9408KiB/10048msec) 00:21:00.439 slat (usec): min=4, max=4027, avg=20.92, stdev=165.38 00:21:00.439 clat (usec): min=1289, max=144023, avg=68145.91, stdev=24921.92 00:21:00.439 lat (usec): min=1300, max=144033, avg=68166.83, stdev=24923.50 00:21:00.439 clat percentiles (msec): 00:21:00.439 | 1.00th=[ 3], 5.00th=[ 24], 10.00th=[ 40], 20.00th=[ 48], 00:21:00.439 | 30.00th=[ 55], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 75], 00:21:00.439 | 70.00th=[ 80], 80.00th=[ 86], 90.00th=[ 104], 95.00th=[ 111], 00:21:00.439 | 99.00th=[ 124], 99.50th=[ 125], 99.90th=[ 138], 99.95th=[ 138], 00:21:00.439 | 99.99th=[ 144] 00:21:00.439 bw ( KiB/s): min= 640, max= 2296, per=4.34%, avg=936.80, stdev=329.40, samples=20 00:21:00.439 iops : min= 160, max= 574, avg=234.20, stdev=82.35, samples=20 00:21:00.439 lat (msec) : 2=0.09%, 4=1.87%, 10=0.77%, 20=1.23%, 50=19.22% 00:21:00.439 lat (msec) : 100=65.26%, 250=11.56% 00:21:00.439 cpu : usr=46.60%, sys=3.11%, ctx=1280, majf=0, minf=9 00:21:00.439 IO depths : 1=0.1%, 2=0.6%, 4=2.0%, 8=81.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:00.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.439 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.439 issued rwts: total=2352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.439 filename1: (groupid=0, jobs=1): err= 0: pid=82853: Thu Jul 25 14:03:47 2024 00:21:00.439 read: IOPS=232, BW=929KiB/s (952kB/s)(9304KiB/10010msec) 00:21:00.439 slat (usec): min=4, max=12025, avg=32.93, stdev=368.37 00:21:00.439 clat (msec): min=20, max=124, avg=68.71, stdev=20.80 00:21:00.439 lat (msec): min=20, max=124, avg=68.74, stdev=20.80 00:21:00.439 clat percentiles (msec): 00:21:00.439 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 49], 00:21:00.439 | 30.00th=[ 55], 40.00th=[ 62], 50.00th=[ 71], 60.00th=[ 73], 00:21:00.439 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 99], 95.00th=[ 110], 00:21:00.439 | 99.00th=[ 120], 99.50th=[ 123], 99.90th=[ 125], 99.95th=[ 125], 00:21:00.439 | 99.99th=[ 125] 00:21:00.439 bw ( KiB/s): min= 712, max= 1282, per=4.28%, avg=924.10, stdev=115.83, samples=20 00:21:00.439 iops : min= 178, max= 320, avg=231.00, stdev=28.88, samples=20 00:21:00.439 lat (msec) : 50=23.30%, 100=68.01%, 250=8.68% 00:21:00.439 cpu : usr=40.49%, sys=2.14%, ctx=1134, majf=0, minf=9 00:21:00.439 IO depths : 1=0.1%, 2=0.4%, 4=1.8%, 8=82.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:00.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.439 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.439 issued rwts: total=2326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.439 filename2: (groupid=0, jobs=1): err= 0: pid=82854: Thu Jul 25 14:03:47 2024 00:21:00.439 read: IOPS=224, BW=898KiB/s (920kB/s)(9012KiB/10033msec) 00:21:00.439 slat (usec): min=6, max=8040, avg=24.75, stdev=238.85 00:21:00.439 clat (msec): min=16, max=140, avg=71.10, stdev=21.52 00:21:00.439 lat (msec): min=16, max=140, avg=71.12, stdev=21.52 00:21:00.439 clat percentiles (msec): 00:21:00.439 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 47], 20.00th=[ 50], 00:21:00.439 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:21:00.439 | 70.00th=[ 82], 
80.00th=[ 85], 90.00th=[ 103], 95.00th=[ 109], 00:21:00.439 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 130], 99.95th=[ 133], 00:21:00.439 | 99.99th=[ 140] 00:21:00.439 bw ( KiB/s): min= 664, max= 1592, per=4.14%, avg=894.80, stdev=184.36, samples=20 00:21:00.439 iops : min= 166, max= 398, avg=223.70, stdev=46.09, samples=20 00:21:00.439 lat (msec) : 20=0.84%, 50=20.24%, 100=68.80%, 250=10.12% 00:21:00.439 cpu : usr=31.54%, sys=2.05%, ctx=907, majf=0, minf=9 00:21:00.439 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.5%, 16=16.4%, 32=0.0%, >=64=0.0% 00:21:00.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.439 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.439 issued rwts: total=2253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.439 filename2: (groupid=0, jobs=1): err= 0: pid=82855: Thu Jul 25 14:03:47 2024 00:21:00.439 read: IOPS=230, BW=923KiB/s (945kB/s)(9240KiB/10014msec) 00:21:00.439 slat (usec): min=3, max=8057, avg=31.06, stdev=301.04 00:21:00.439 clat (msec): min=20, max=131, avg=69.20, stdev=20.66 00:21:00.439 lat (msec): min=20, max=131, avg=69.23, stdev=20.66 00:21:00.439 clat percentiles (msec): 00:21:00.439 | 1.00th=[ 28], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 50], 00:21:00.439 | 30.00th=[ 55], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 73], 00:21:00.439 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 108], 00:21:00.439 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 132], 99.95th=[ 132], 00:21:00.439 | 99.99th=[ 132] 00:21:00.439 bw ( KiB/s): min= 688, max= 1392, per=4.25%, avg=917.60, stdev=131.87, samples=20 00:21:00.439 iops : min= 172, max= 348, avg=229.40, stdev=32.97, samples=20 00:21:00.439 lat (msec) : 50=23.55%, 100=67.66%, 250=8.79% 00:21:00.439 cpu : usr=39.37%, sys=2.32%, ctx=1301, majf=0, minf=9 00:21:00.439 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.6%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:00.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.439 complete : 0=0.0%, 4=87.4%, 8=12.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.439 issued rwts: total=2310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.440 filename2: (groupid=0, jobs=1): err= 0: pid=82856: Thu Jul 25 14:03:47 2024 00:21:00.440 read: IOPS=211, BW=844KiB/s (865kB/s)(8476KiB/10038msec) 00:21:00.440 slat (usec): min=7, max=8024, avg=19.28, stdev=174.12 00:21:00.440 clat (msec): min=12, max=144, avg=75.65, stdev=22.93 00:21:00.440 lat (msec): min=12, max=144, avg=75.67, stdev=22.93 00:21:00.440 clat percentiles (msec): 00:21:00.440 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 58], 00:21:00.440 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 83], 00:21:00.440 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 118], 00:21:00.440 | 99.00th=[ 122], 99.50th=[ 130], 99.90th=[ 144], 99.95th=[ 144], 00:21:00.440 | 99.99th=[ 144] 00:21:00.440 bw ( KiB/s): min= 608, max= 1536, per=3.90%, avg=841.20, stdev=187.08, samples=20 00:21:00.440 iops : min= 152, max= 384, avg=210.30, stdev=46.77, samples=20 00:21:00.440 lat (msec) : 20=0.76%, 50=15.53%, 100=70.79%, 250=12.93% 00:21:00.440 cpu : usr=33.63%, sys=2.09%, ctx=1038, majf=0, minf=9 00:21:00.440 IO depths : 1=0.1%, 2=1.9%, 4=7.8%, 8=74.8%, 16=15.4%, 32=0.0%, >=64=0.0% 00:21:00.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.440 complete : 0=0.0%, 4=89.6%, 8=8.7%, 
16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.440 issued rwts: total=2119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.440 filename2: (groupid=0, jobs=1): err= 0: pid=82857: Thu Jul 25 14:03:47 2024 00:21:00.440 read: IOPS=224, BW=897KiB/s (918kB/s)(9000KiB/10035msec) 00:21:00.440 slat (usec): min=5, max=9024, avg=29.91, stdev=327.80 00:21:00.440 clat (msec): min=16, max=143, avg=71.16, stdev=21.14 00:21:00.440 lat (msec): min=16, max=143, avg=71.19, stdev=21.14 00:21:00.440 clat percentiles (msec): 00:21:00.440 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 51], 00:21:00.440 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:21:00.440 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 104], 95.00th=[ 109], 00:21:00.440 | 99.00th=[ 125], 99.50th=[ 130], 99.90th=[ 132], 99.95th=[ 132], 00:21:00.440 | 99.99th=[ 144] 00:21:00.440 bw ( KiB/s): min= 664, max= 1392, per=4.14%, avg=893.60, stdev=139.56, samples=20 00:21:00.440 iops : min= 166, max= 348, avg=223.40, stdev=34.89, samples=20 00:21:00.440 lat (msec) : 20=0.04%, 50=20.27%, 100=69.02%, 250=10.67% 00:21:00.440 cpu : usr=35.12%, sys=2.38%, ctx=1074, majf=0, minf=9 00:21:00.440 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.3%, 16=16.3%, 32=0.0%, >=64=0.0% 00:21:00.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.440 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.440 issued rwts: total=2250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.440 filename2: (groupid=0, jobs=1): err= 0: pid=82858: Thu Jul 25 14:03:47 2024 00:21:00.440 read: IOPS=220, BW=881KiB/s (902kB/s)(8844KiB/10039msec) 00:21:00.440 slat (usec): min=8, max=8039, avg=26.16, stdev=295.17 00:21:00.440 clat (msec): min=11, max=143, avg=72.49, stdev=22.79 00:21:00.440 lat (msec): min=11, max=143, avg=72.52, stdev=22.79 00:21:00.440 clat percentiles (msec): 00:21:00.440 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 47], 20.00th=[ 50], 00:21:00.440 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 74], 00:21:00.440 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 110], 00:21:00.440 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:21:00.440 | 99.99th=[ 144] 00:21:00.440 bw ( KiB/s): min= 592, max= 1536, per=4.07%, avg=878.00, stdev=172.97, samples=20 00:21:00.440 iops : min= 148, max= 384, avg=219.50, stdev=43.24, samples=20 00:21:00.440 lat (msec) : 20=0.72%, 50=20.40%, 100=65.67%, 250=13.21% 00:21:00.440 cpu : usr=31.19%, sys=1.96%, ctx=1040, majf=0, minf=9 00:21:00.440 IO depths : 1=0.1%, 2=1.3%, 4=5.2%, 8=77.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:00.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.440 complete : 0=0.0%, 4=88.7%, 8=10.2%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.440 issued rwts: total=2211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.440 filename2: (groupid=0, jobs=1): err= 0: pid=82859: Thu Jul 25 14:03:47 2024 00:21:00.440 read: IOPS=230, BW=922KiB/s (944kB/s)(9232KiB/10018msec) 00:21:00.440 slat (usec): min=4, max=12036, avg=41.03, stdev=440.80 00:21:00.440 clat (msec): min=16, max=156, avg=69.23, stdev=20.81 00:21:00.440 lat (msec): min=16, max=156, avg=69.27, stdev=20.80 00:21:00.440 clat percentiles (msec): 00:21:00.440 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 50], 00:21:00.440 | 
30.00th=[ 56], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 73], 00:21:00.440 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 99], 95.00th=[ 109], 00:21:00.440 | 99.00th=[ 120], 99.50th=[ 124], 99.90th=[ 132], 99.95th=[ 132], 00:21:00.440 | 99.99th=[ 157] 00:21:00.440 bw ( KiB/s): min= 664, max= 1346, per=4.25%, avg=916.90, stdev=128.76, samples=20 00:21:00.440 iops : min= 166, max= 336, avg=229.10, stdev=32.11, samples=20 00:21:00.440 lat (msec) : 20=0.56%, 50=21.01%, 100=68.67%, 250=9.75% 00:21:00.440 cpu : usr=41.11%, sys=2.43%, ctx=1327, majf=0, minf=9 00:21:00.440 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:00.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.440 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.440 issued rwts: total=2308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.440 filename2: (groupid=0, jobs=1): err= 0: pid=82860: Thu Jul 25 14:03:47 2024 00:21:00.440 read: IOPS=203, BW=814KiB/s (833kB/s)(8168KiB/10038msec) 00:21:00.440 slat (usec): min=7, max=8039, avg=23.47, stdev=251.02 00:21:00.440 clat (msec): min=13, max=158, avg=78.42, stdev=24.43 00:21:00.440 lat (msec): min=13, max=158, avg=78.44, stdev=24.43 00:21:00.440 clat percentiles (msec): 00:21:00.440 | 1.00th=[ 26], 5.00th=[ 31], 10.00th=[ 48], 20.00th=[ 64], 00:21:00.440 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 84], 00:21:00.440 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 111], 95.00th=[ 121], 00:21:00.440 | 99.00th=[ 133], 99.50th=[ 134], 99.90th=[ 155], 99.95th=[ 155], 00:21:00.440 | 99.99th=[ 159] 00:21:00.440 bw ( KiB/s): min= 528, max= 1776, per=3.76%, avg=810.40, stdev=247.06, samples=20 00:21:00.440 iops : min= 132, max= 444, avg=202.60, stdev=61.76, samples=20 00:21:00.440 lat (msec) : 20=0.78%, 50=12.19%, 100=71.01%, 250=16.01% 00:21:00.440 cpu : usr=42.58%, sys=3.05%, ctx=922, majf=0, minf=9 00:21:00.440 IO depths : 1=0.1%, 2=3.9%, 4=15.6%, 8=66.2%, 16=14.3%, 32=0.0%, >=64=0.0% 00:21:00.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.440 complete : 0=0.0%, 4=91.9%, 8=4.7%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.440 issued rwts: total=2042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.440 filename2: (groupid=0, jobs=1): err= 0: pid=82861: Thu Jul 25 14:03:47 2024 00:21:00.440 read: IOPS=221, BW=886KiB/s (907kB/s)(8888KiB/10034msec) 00:21:00.440 slat (usec): min=5, max=12030, avg=32.41, stdev=375.85 00:21:00.440 clat (msec): min=18, max=137, avg=72.08, stdev=20.76 00:21:00.440 lat (msec): min=18, max=137, avg=72.11, stdev=20.75 00:21:00.440 clat percentiles (msec): 00:21:00.440 | 1.00th=[ 32], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 52], 00:21:00.440 | 30.00th=[ 59], 40.00th=[ 68], 50.00th=[ 73], 60.00th=[ 77], 00:21:00.440 | 70.00th=[ 81], 80.00th=[ 88], 90.00th=[ 104], 95.00th=[ 112], 00:21:00.440 | 99.00th=[ 120], 99.50th=[ 126], 99.90th=[ 131], 99.95th=[ 138], 00:21:00.440 | 99.99th=[ 138] 00:21:00.440 bw ( KiB/s): min= 632, max= 1314, per=4.09%, avg=882.50, stdev=137.60, samples=20 00:21:00.440 iops : min= 158, max= 328, avg=220.60, stdev=34.32, samples=20 00:21:00.440 lat (msec) : 20=0.09%, 50=18.41%, 100=69.58%, 250=11.93% 00:21:00.440 cpu : usr=44.18%, sys=2.54%, ctx=1395, majf=0, minf=9 00:21:00.440 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=80.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:21:00.440 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.440 complete : 0=0.0%, 4=87.9%, 8=11.6%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.440 issued rwts: total=2222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:00.440 00:21:00.440 Run status group 0 (all jobs): 00:21:00.440 READ: bw=21.1MiB/s (22.1MB/s), 814KiB/s-948KiB/s (833kB/s-971kB/s), io=212MiB (222MB), run=10002-10066msec 00:21:00.440 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:21:00.440 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:00.440 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:00.440 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:00.440 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:00.440 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:00.440 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.440 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:00.440 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.440 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:00.440 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.440 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:00.440 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.440 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:00.440 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:00.440 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:00.440 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:00.440 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.440 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:00.441 14:03:47 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:00.441 bdev_null0 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:00.441 [2024-07-25 14:03:47.868657] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # 
for sub in "$@" 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:00.441 bdev_null1 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:00.441 { 00:21:00.441 "params": { 00:21:00.441 "name": "Nvme$subsystem", 00:21:00.441 "trtype": "$TEST_TRANSPORT", 00:21:00.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.441 "adrfam": "ipv4", 00:21:00.441 "trsvcid": "$NVMF_PORT", 00:21:00.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.441 "hdgst": ${hdgst:-false}, 00:21:00.441 "ddgst": ${ddgst:-false} 00:21:00.441 }, 00:21:00.441 "method": "bdev_nvme_attach_controller" 00:21:00.441 } 00:21:00.441 EOF 00:21:00.441 )") 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:00.441 { 00:21:00.441 "params": { 00:21:00.441 "name": "Nvme$subsystem", 00:21:00.441 "trtype": "$TEST_TRANSPORT", 00:21:00.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.441 "adrfam": "ipv4", 00:21:00.441 "trsvcid": "$NVMF_PORT", 00:21:00.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.441 "hdgst": ${hdgst:-false}, 00:21:00.441 "ddgst": ${ddgst:-false} 00:21:00.441 }, 00:21:00.441 "method": "bdev_nvme_attach_controller" 00:21:00.441 } 00:21:00.441 EOF 00:21:00.441 )") 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
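For reference, the gen_fio_conf trace above is building the fio job description that fio receives on /dev/fd/61, while gen_nvmf_target_json assembles the matching SPDK bdev configuration on /dev/fd/62. A hand-written job file with the same shape as this run (randread, bs=8k,16k,128k, iodepth=8, numjobs=2, runtime=5, one job section per attached bdev) might look like the sketch below; the Nvme0n1/Nvme1n1 bdev names and the thread/time_based options are assumptions rather than values taken from this log.

# Sketch of an equivalent fio job file (not the harness-generated one).
cat > dif_rand_params.fio <<'EOF'
[global]
; spdk_bdev is the ioengine registered by the SPDK fio plugin loaded via LD_PRELOAD
ioengine=spdk_bdev
thread=1
rw=randread
; read,write,trim block sizes, matching bs=8k,16k,128k in the traced run
bs=8k,16k,128k
iodepth=8
; two jobs per filename, consistent with the "Starting 4 threads" line further down
numjobs=2
time_based=1
runtime=5

[filename0]
; assumed bdev name for the namespace behind controller Nvme0
filename=Nvme0n1

[filename1]
; assumed bdev name for the namespace behind controller Nvme1
filename=Nvme1n1
EOF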
00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:21:00.441 14:03:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:00.441 "params": { 00:21:00.441 "name": "Nvme0", 00:21:00.441 "trtype": "tcp", 00:21:00.441 "traddr": "10.0.0.2", 00:21:00.441 "adrfam": "ipv4", 00:21:00.441 "trsvcid": "4420", 00:21:00.441 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:00.441 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:00.442 "hdgst": false, 00:21:00.442 "ddgst": false 00:21:00.442 }, 00:21:00.442 "method": "bdev_nvme_attach_controller" 00:21:00.442 },{ 00:21:00.442 "params": { 00:21:00.442 "name": "Nvme1", 00:21:00.442 "trtype": "tcp", 00:21:00.442 "traddr": "10.0.0.2", 00:21:00.442 "adrfam": "ipv4", 00:21:00.442 "trsvcid": "4420", 00:21:00.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.442 "hdgst": false, 00:21:00.442 "ddgst": false 00:21:00.442 }, 00:21:00.442 "method": "bdev_nvme_attach_controller" 00:21:00.442 }' 00:21:00.442 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:00.442 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:00.442 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:00.442 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:00.442 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:00.442 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:00.442 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:00.442 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:00.442 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:00.442 14:03:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:00.442 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:00.442 ... 00:21:00.442 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:00.442 ... 
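The printf just above is the fully expanded JSON handed to the fio bdev plugin on /dev/fd/62: one bdev_nvme_attach_controller entry per subsystem created earlier (Nvme0 against cnode0 and Nvme1 against cnode1, both on 10.0.0.2:4420, digests disabled). Outside the test harness, roughly the same run could be reproduced by wrapping those entries in SPDK's usual bdev-subsystem JSON layout and pointing fio at it; the exact wrapper gen_nvmf_target_json emits is not shown in this log, so the layout and file names below are illustrative assumptions, while the plugin path and fio options mirror the trace above.

# Sketch only: wrap the attach-controller params printed above in an SPDK JSON config file.
cat > nvme_tcp_bdevs.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        },
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Load the SPDK fio plugin and run the job file sketched earlier, mirroring the
# LD_PRELOAD + --spdk_json_conf invocation visible in the trace above.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    fio --ioengine=spdk_bdev --spdk_json_conf nvme_tcp_bdevs.json dif_rand_params.fio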
00:21:00.442 fio-3.35 00:21:00.442 Starting 4 threads 00:21:05.704 00:21:05.704 filename0: (groupid=0, jobs=1): err= 0: pid=83011: Thu Jul 25 14:03:53 2024 00:21:05.704 read: IOPS=2154, BW=16.8MiB/s (17.6MB/s)(84.2MiB/5003msec) 00:21:05.704 slat (nsec): min=4132, max=79381, avg=16829.89, stdev=6321.59 00:21:05.704 clat (usec): min=1024, max=11399, avg=3669.69, stdev=983.51 00:21:05.704 lat (usec): min=1038, max=11413, avg=3686.52, stdev=983.05 00:21:05.704 clat percentiles (usec): 00:21:05.704 | 1.00th=[ 1893], 5.00th=[ 2212], 10.00th=[ 2606], 20.00th=[ 2737], 00:21:05.704 | 30.00th=[ 3195], 40.00th=[ 3359], 50.00th=[ 3392], 60.00th=[ 3556], 00:21:05.704 | 70.00th=[ 4146], 80.00th=[ 4817], 90.00th=[ 5080], 95.00th=[ 5211], 00:21:05.704 | 99.00th=[ 6063], 99.50th=[ 6194], 99.90th=[ 6915], 99.95th=[ 8094], 00:21:05.704 | 99.99th=[10814] 00:21:05.704 bw ( KiB/s): min=16016, max=18992, per=26.41%, avg=17130.44, stdev=1007.61, samples=9 00:21:05.704 iops : min= 2002, max= 2374, avg=2141.22, stdev=125.99, samples=9 00:21:05.704 lat (msec) : 2=3.01%, 4=64.67%, 10=32.30%, 20=0.02% 00:21:05.704 cpu : usr=92.22%, sys=6.62%, ctx=49, majf=0, minf=0 00:21:05.704 IO depths : 1=0.1%, 2=0.9%, 4=68.2%, 8=30.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:05.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.704 complete : 0=0.0%, 4=99.7%, 8=0.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.704 issued rwts: total=10779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.704 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:05.704 filename0: (groupid=0, jobs=1): err= 0: pid=83012: Thu Jul 25 14:03:53 2024 00:21:05.704 read: IOPS=2041, BW=16.0MiB/s (16.7MB/s)(79.8MiB/5002msec) 00:21:05.704 slat (nsec): min=7512, max=62513, avg=12712.97, stdev=6711.68 00:21:05.704 clat (usec): min=1027, max=14726, avg=3878.71, stdev=1015.05 00:21:05.704 lat (usec): min=1036, max=14741, avg=3891.43, stdev=1015.81 00:21:05.704 clat percentiles (usec): 00:21:05.704 | 1.00th=[ 1909], 5.00th=[ 2573], 10.00th=[ 2671], 20.00th=[ 2966], 00:21:05.704 | 30.00th=[ 3359], 40.00th=[ 3425], 50.00th=[ 3490], 60.00th=[ 4015], 00:21:05.704 | 70.00th=[ 4883], 80.00th=[ 5014], 90.00th=[ 5145], 95.00th=[ 5276], 00:21:05.704 | 99.00th=[ 5538], 99.50th=[ 6194], 99.90th=[ 9765], 99.95th=[ 9765], 00:21:05.704 | 99.99th=[10028] 00:21:05.704 bw ( KiB/s): min=12544, max=19632, per=24.87%, avg=16128.00, stdev=2237.44, samples=9 00:21:05.704 iops : min= 1568, max= 2454, avg=2016.00, stdev=279.68, samples=9 00:21:05.704 lat (msec) : 2=3.09%, 4=56.65%, 10=40.22%, 20=0.04% 00:21:05.704 cpu : usr=91.90%, sys=7.14%, ctx=7, majf=0, minf=0 00:21:05.704 IO depths : 1=0.1%, 2=4.5%, 4=66.4%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:05.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.704 complete : 0=0.0%, 4=98.2%, 8=1.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.704 issued rwts: total=10214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.704 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:05.704 filename1: (groupid=0, jobs=1): err= 0: pid=83013: Thu Jul 25 14:03:53 2024 00:21:05.704 read: IOPS=1821, BW=14.2MiB/s (14.9MB/s)(71.2MiB/5001msec) 00:21:05.704 slat (nsec): min=3968, max=62764, avg=16055.95, stdev=6130.41 00:21:05.704 clat (usec): min=1314, max=14977, avg=4345.71, stdev=1292.14 00:21:05.704 lat (usec): min=1322, max=14993, avg=4361.77, stdev=1290.34 00:21:05.704 clat percentiles (usec): 00:21:05.704 | 1.00th=[ 2147], 5.00th=[ 2474], 10.00th=[ 2737], 20.00th=[ 3294], 
00:21:05.704 | 30.00th=[ 3359], 40.00th=[ 3490], 50.00th=[ 4555], 60.00th=[ 4948], 00:21:05.704 | 70.00th=[ 5145], 80.00th=[ 5473], 90.00th=[ 6063], 95.00th=[ 6194], 00:21:05.704 | 99.00th=[ 6390], 99.50th=[ 6456], 99.90th=[10814], 99.95th=[13042], 00:21:05.704 | 99.99th=[15008] 00:21:05.704 bw ( KiB/s): min=11152, max=17136, per=23.17%, avg=15029.33, stdev=2340.75, samples=9 00:21:05.704 iops : min= 1394, max= 2142, avg=1878.56, stdev=292.53, samples=9 00:21:05.704 lat (msec) : 2=0.43%, 4=46.51%, 10=52.95%, 20=0.11% 00:21:05.704 cpu : usr=92.26%, sys=6.60%, ctx=18, majf=0, minf=9 00:21:05.704 IO depths : 1=0.1%, 2=7.2%, 4=62.5%, 8=30.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:05.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.704 complete : 0=0.0%, 4=97.2%, 8=2.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.704 issued rwts: total=9110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.704 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:05.704 filename1: (groupid=0, jobs=1): err= 0: pid=83014: Thu Jul 25 14:03:53 2024 00:21:05.704 read: IOPS=2090, BW=16.3MiB/s (17.1MB/s)(81.7MiB/5001msec) 00:21:05.704 slat (nsec): min=3942, max=55210, avg=15419.22, stdev=6022.98 00:21:05.704 clat (usec): min=311, max=11454, avg=3786.14, stdev=984.25 00:21:05.704 lat (usec): min=323, max=11463, avg=3801.56, stdev=984.01 00:21:05.704 clat percentiles (usec): 00:21:05.704 | 1.00th=[ 2180], 5.00th=[ 2507], 10.00th=[ 2638], 20.00th=[ 2769], 00:21:05.704 | 30.00th=[ 3326], 40.00th=[ 3392], 50.00th=[ 3425], 60.00th=[ 3752], 00:21:05.704 | 70.00th=[ 4686], 80.00th=[ 4883], 90.00th=[ 5145], 95.00th=[ 5342], 00:21:05.704 | 99.00th=[ 5538], 99.50th=[ 5604], 99.90th=[ 6915], 99.95th=[ 8094], 00:21:05.704 | 99.99th=[10945] 00:21:05.704 bw ( KiB/s): min=13280, max=17984, per=25.53%, avg=16556.56, stdev=1399.21, samples=9 00:21:05.704 iops : min= 1660, max= 2248, avg=2069.56, stdev=174.90, samples=9 00:21:05.704 lat (usec) : 500=0.01%, 1000=0.03% 00:21:05.704 lat (msec) : 2=0.47%, 4=63.28%, 10=36.20%, 20=0.02% 00:21:05.704 cpu : usr=91.68%, sys=7.34%, ctx=6, majf=0, minf=10 00:21:05.704 IO depths : 1=0.1%, 2=1.9%, 4=67.2%, 8=30.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:05.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.704 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.704 issued rwts: total=10457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.704 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:05.704 00:21:05.704 Run status group 0 (all jobs): 00:21:05.704 READ: bw=63.3MiB/s (66.4MB/s), 14.2MiB/s-16.8MiB/s (14.9MB/s-17.6MB/s), io=317MiB (332MB), run=5001-5003msec 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.704 14:03:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.705 00:21:05.705 real 0m23.534s 00:21:05.705 user 2m2.513s 00:21:05.705 sys 0m8.975s 00:21:05.705 14:03:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:05.705 14:03:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.705 ************************************ 00:21:05.705 END TEST fio_dif_rand_params 00:21:05.705 ************************************ 00:21:05.705 14:03:54 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:05.705 14:03:54 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:05.705 14:03:54 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:05.705 14:03:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:05.705 ************************************ 00:21:05.705 START TEST fio_dif_digest 00:21:05.705 ************************************ 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:05.705 bdev_null0 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:05.705 [2024-07-25 14:03:54.069041] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:05.705 { 00:21:05.705 "params": { 00:21:05.705 "name": "Nvme$subsystem", 00:21:05.705 "trtype": "$TEST_TRANSPORT", 00:21:05.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.705 "adrfam": "ipv4", 00:21:05.705 "trsvcid": "$NVMF_PORT", 00:21:05.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.705 "hdgst": ${hdgst:-false}, 00:21:05.705 "ddgst": ${ddgst:-false} 00:21:05.705 }, 00:21:05.705 "method": "bdev_nvme_attach_controller" 00:21:05.705 } 00:21:05.705 EOF 00:21:05.705 )") 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:05.705 "params": { 00:21:05.705 "name": "Nvme0", 00:21:05.705 "trtype": "tcp", 00:21:05.705 "traddr": "10.0.0.2", 00:21:05.705 "adrfam": "ipv4", 00:21:05.705 "trsvcid": "4420", 00:21:05.705 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:05.705 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:05.705 "hdgst": true, 00:21:05.705 "ddgst": true 00:21:05.705 }, 00:21:05.705 "method": "bdev_nvme_attach_controller" 00:21:05.705 }' 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:05.705 14:03:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:05.705 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:05.705 ... 
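Relative to the random-params run above, the only functional change in the attach-controller JSON just printed is "hdgst": true and "ddgst": true, which enables NVMe/TCP header and data digests for controller Nvme0; the workload itself shifts to 128 KiB random reads at queue depth 3 from three jobs for 10 seconds against the DIF-type-3 null bdev created earlier. A job file with that shape might look like the sketch below, with the same caveats as before (the bdev name and the thread/time_based options are assumptions, not values taken from this log).

# Sketch of a digest-run job file; the controller-side change is only the two
# "hdgst"/"ddgst" booleans in the JSON config, as printed above.
cat > dif_digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k,128k,128k
iodepth=3
numjobs=3
time_based=1
runtime=10

[filename0]
; assumed bdev name for the namespace behind controller Nvme0
filename=Nvme0n1
EOF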
00:21:05.705 fio-3.35 00:21:05.705 Starting 3 threads 00:21:17.914 00:21:17.914 filename0: (groupid=0, jobs=1): err= 0: pid=83121: Thu Jul 25 14:04:04 2024 00:21:17.914 read: IOPS=218, BW=27.4MiB/s (28.7MB/s)(274MiB/10002msec) 00:21:17.914 slat (nsec): min=7233, max=67880, avg=13775.01, stdev=7367.56 00:21:17.914 clat (usec): min=9372, max=15777, avg=13665.49, stdev=257.35 00:21:17.914 lat (usec): min=9380, max=15800, avg=13679.27, stdev=257.82 00:21:17.914 clat percentiles (usec): 00:21:17.914 | 1.00th=[13435], 5.00th=[13566], 10.00th=[13566], 20.00th=[13566], 00:21:17.914 | 30.00th=[13566], 40.00th=[13566], 50.00th=[13698], 60.00th=[13698], 00:21:17.914 | 70.00th=[13698], 80.00th=[13698], 90.00th=[13829], 95.00th=[13829], 00:21:17.914 | 99.00th=[14615], 99.50th=[15270], 99.90th=[15795], 99.95th=[15795], 00:21:17.914 | 99.99th=[15795] 00:21:17.914 bw ( KiB/s): min=27648, max=28416, per=33.33%, avg=28011.79, stdev=393.98, samples=19 00:21:17.914 iops : min= 216, max= 222, avg=218.84, stdev= 3.08, samples=19 00:21:17.914 lat (msec) : 10=0.14%, 20=99.86% 00:21:17.914 cpu : usr=94.76%, sys=4.63%, ctx=23, majf=0, minf=0 00:21:17.914 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:17.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.914 issued rwts: total=2190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:17.914 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:17.914 filename0: (groupid=0, jobs=1): err= 0: pid=83122: Thu Jul 25 14:04:04 2024 00:21:17.914 read: IOPS=218, BW=27.4MiB/s (28.7MB/s)(274MiB/10005msec) 00:21:17.914 slat (nsec): min=6842, max=38164, avg=11155.76, stdev=3901.29 00:21:17.914 clat (usec): min=5292, max=22236, avg=13677.59, stdev=495.29 00:21:17.914 lat (usec): min=5301, max=22256, avg=13688.74, stdev=495.36 00:21:17.914 clat percentiles (usec): 00:21:17.914 | 1.00th=[13435], 5.00th=[13566], 10.00th=[13566], 20.00th=[13566], 00:21:17.914 | 30.00th=[13566], 40.00th=[13566], 50.00th=[13698], 60.00th=[13698], 00:21:17.914 | 70.00th=[13698], 80.00th=[13698], 90.00th=[13829], 95.00th=[13829], 00:21:17.915 | 99.00th=[14615], 99.50th=[15533], 99.90th=[22152], 99.95th=[22152], 00:21:17.915 | 99.99th=[22152] 00:21:17.915 bw ( KiB/s): min=26880, max=28416, per=33.28%, avg=27971.37, stdev=466.16, samples=19 00:21:17.915 iops : min= 210, max= 222, avg=218.53, stdev= 3.64, samples=19 00:21:17.915 lat (msec) : 10=0.14%, 20=99.73%, 50=0.14% 00:21:17.915 cpu : usr=95.07%, sys=4.36%, ctx=8, majf=0, minf=9 00:21:17.915 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:17.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.915 issued rwts: total=2190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:17.915 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:17.915 filename0: (groupid=0, jobs=1): err= 0: pid=83123: Thu Jul 25 14:04:04 2024 00:21:17.915 read: IOPS=218, BW=27.4MiB/s (28.7MB/s)(274MiB/10002msec) 00:21:17.915 slat (nsec): min=8125, max=67928, avg=16357.54, stdev=10413.04 00:21:17.915 clat (usec): min=10034, max=16130, avg=13655.62, stdev=264.26 00:21:17.915 lat (usec): min=10043, max=16163, avg=13671.98, stdev=264.64 00:21:17.915 clat percentiles (usec): 00:21:17.915 | 1.00th=[13435], 5.00th=[13435], 10.00th=[13566], 20.00th=[13566], 00:21:17.915 | 30.00th=[13566], 
40.00th=[13566], 50.00th=[13566], 60.00th=[13698], 00:21:17.915 | 70.00th=[13698], 80.00th=[13698], 90.00th=[13829], 95.00th=[13829], 00:21:17.915 | 99.00th=[14484], 99.50th=[15533], 99.90th=[16057], 99.95th=[16057], 00:21:17.915 | 99.99th=[16188] 00:21:17.915 bw ( KiB/s): min=27648, max=28416, per=33.33%, avg=28011.79, stdev=393.98, samples=19 00:21:17.915 iops : min= 216, max= 222, avg=218.84, stdev= 3.08, samples=19 00:21:17.915 lat (msec) : 20=100.00% 00:21:17.915 cpu : usr=93.98%, sys=5.37%, ctx=10, majf=0, minf=0 00:21:17.915 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:17.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.915 issued rwts: total=2190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:17.915 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:17.915 00:21:17.915 Run status group 0 (all jobs): 00:21:17.915 READ: bw=82.1MiB/s (86.1MB/s), 27.4MiB/s-27.4MiB/s (28.7MB/s-28.7MB/s), io=821MiB (861MB), run=10002-10005msec 00:21:17.915 14:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:17.915 14:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:17.915 14:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:17.915 14:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:17.915 14:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:17.915 14:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:17.915 14:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.915 14:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:17.915 14:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.915 14:04:05 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:17.915 14:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.915 14:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:17.915 14:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.915 00:21:17.915 real 0m11.013s 00:21:17.915 user 0m29.069s 00:21:17.915 sys 0m1.696s 00:21:17.915 14:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:17.915 14:04:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:17.915 ************************************ 00:21:17.915 END TEST fio_dif_digest 00:21:17.915 ************************************ 00:21:17.915 14:04:05 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:17.915 14:04:05 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:17.915 14:04:05 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:17.915 14:04:05 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:21:17.915 14:04:05 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:17.915 14:04:05 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:21:17.915 14:04:05 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:17.915 14:04:05 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:17.915 rmmod nvme_tcp 00:21:17.915 rmmod nvme_fabrics 00:21:17.915 rmmod nvme_keyring 00:21:17.915 14:04:05 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:17.915 
14:04:05 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:21:17.915 14:04:05 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:21:17.915 14:04:05 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 82365 ']' 00:21:17.915 14:04:05 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 82365 00:21:17.915 14:04:05 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 82365 ']' 00:21:17.915 14:04:05 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 82365 00:21:17.915 14:04:05 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:21:17.915 14:04:05 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:17.915 14:04:05 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82365 00:21:17.915 14:04:05 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:17.915 killing process with pid 82365 00:21:17.915 14:04:05 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:17.915 14:04:05 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82365' 00:21:17.915 14:04:05 nvmf_dif -- common/autotest_common.sh@969 -- # kill 82365 00:21:17.915 14:04:05 nvmf_dif -- common/autotest_common.sh@974 -- # wait 82365 00:21:17.915 14:04:05 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:17.915 14:04:05 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:17.915 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:17.915 Waiting for block devices as requested 00:21:17.915 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:17.915 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:17.915 14:04:06 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:17.915 14:04:06 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:17.915 14:04:06 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:17.915 14:04:06 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:17.915 14:04:06 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.915 14:04:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:17.915 14:04:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.915 14:04:06 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:17.915 00:21:17.915 real 0m59.225s 00:21:17.915 user 3m46.853s 00:21:17.915 sys 0m19.566s 00:21:17.915 14:04:06 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:17.916 14:04:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:17.916 ************************************ 00:21:17.916 END TEST nvmf_dif 00:21:17.916 ************************************ 00:21:17.916 14:04:06 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:17.916 14:04:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:17.916 14:04:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:17.916 14:04:06 -- common/autotest_common.sh@10 -- # set +x 00:21:17.916 ************************************ 00:21:17.916 START TEST nvmf_abort_qd_sizes 00:21:17.916 ************************************ 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:17.916 * Looking for test storage... 
00:21:17.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:17.916 14:04:06 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:17.916 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:17.917 Cannot find device "nvmf_tgt_br" 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:17.917 Cannot find device "nvmf_tgt_br2" 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:17.917 Cannot find device "nvmf_tgt_br" 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:17.917 Cannot find device "nvmf_tgt_br2" 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:17.917 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:17.917 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:17.917 14:04:06 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:17.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:17.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:21:17.917 00:21:17.917 --- 10.0.0.2 ping statistics --- 00:21:17.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.917 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:17.917 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:17.917 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:21:17.917 00:21:17.917 --- 10.0.0.3 ping statistics --- 00:21:17.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.917 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:17.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:17.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:21:17.917 00:21:17.917 --- 10.0.0.1 ping statistics --- 00:21:17.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.917 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:21:17.917 14:04:06 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:18.176 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:18.434 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:18.434 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:18.434 14:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.434 14:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:18.434 14:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:18.434 14:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.434 14:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:18.434 14:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:18.434 14:04:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:18.434 14:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:18.434 14:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:18.434 14:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:18.434 14:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=83713 00:21:18.434 14:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 83713 00:21:18.434 14:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 83713 ']' 00:21:18.434 14:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:18.434 14:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.434 14:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:18.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.434 14:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.434 14:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:18.434 14:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:18.434 [2024-07-25 14:04:07.449949] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
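The nvmf/common.sh trace above builds the virtual test network the rest of this run depends on: a network namespace (nvmf_tgt_ns_spdk) holding the target-side ends of the veth pairs, a bridge (nvmf_br) joining the host-side peers, an iptables ACCEPT rule for TCP port 4420, and ping checks in both directions before nvmf_tgt is launched inside the namespace. A condensed sketch of that topology, using the names and addresses from the trace (the second target interface pair, error handling and teardown are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peers
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                           # host -> namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # namespace -> host
  # the target is then started inside the namespace, as traced above:
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf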
00:21:18.434 [2024-07-25 14:04:07.450055] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.692 [2024-07-25 14:04:07.597430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:18.951 [2024-07-25 14:04:07.727085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.951 [2024-07-25 14:04:07.727143] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.951 [2024-07-25 14:04:07.727156] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.951 [2024-07-25 14:04:07.727167] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.951 [2024-07-25 14:04:07.727176] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:18.951 [2024-07-25 14:04:07.727343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.951 [2024-07-25 14:04:07.727677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:18.951 [2024-07-25 14:04:07.727681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.951 [2024-07-25 14:04:07.727441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.951 [2024-07-25 14:04:07.784554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:19.517 14:04:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:19.517 14:04:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:21:19.517 14:04:08 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:19.517 14:04:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:19.517 14:04:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:19.517 14:04:08 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.517 14:04:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:19.517 14:04:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:19.517 14:04:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:19.517 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:21:19.517 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:21:19.517 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:21:19.517 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:19.517 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:21:19.517 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:21:19.517 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:21:19.518 14:04:08 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
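The scripts/common.sh trace above is the nvme_in_userspace enumeration: NVMe controllers are located by PCI class code (class 01, subclass 08, prog-if 02) with lspci, and only the controllers no longer bound to the kernel nvme driver are kept, yielding 0000:00:10.0 and 0000:00:11.0 here, the first of which becomes the backing device for spdk_target_abort. A minimal sketch of the same enumeration (the FreeBSD branch visible in the trace is left out):

  # enumerate NVMe controllers (PCI class 0108, prog-if 02) by domain:bus:dev.fn
  nvmes=($(lspci -mm -n -D | grep -i -- -p02 \
      | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'))
  bdfs=()
  for bdf in "${nvmes[@]}"; do
      # keep only controllers not claimed by the kernel nvme driver, i.e. the
      # ones already handed to a userspace driver such as uio_pci_generic
      [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && continue
      bdfs+=("$bdf")
  done
  printf '%s\n' "${bdfs[@]}"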
00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:19.518 14:04:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:19.518 ************************************ 00:21:19.518 START TEST spdk_target_abort 00:21:19.518 ************************************ 00:21:19.518 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:21:19.518 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:19.518 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:19.518 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.518 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:19.776 spdk_targetn1 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:19.776 [2024-07-25 14:04:08.624653] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:19.776 [2024-07-25 14:04:08.652814] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.776 14:04:08 
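With the listener network in place, the spdk_target_abort setup traced above comes down to five RPC calls against the target's /var/tmp/spdk.sock. rpc_cmd in the trace is the repo's wrapper around scripts/rpc.py, so expressed directly the sequence looks roughly like this:

  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target   # exposes spdk_targetn1
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

The abort example (build/examples/abort) is then pointed at that listener at queue depths 4, 24 and 64, which is what the runs that follow record.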
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:19.776 14:04:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:23.057 Initializing NVMe Controllers 00:21:23.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:23.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:23.057 Initialization complete. Launching workers. 
00:21:23.057 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11423, failed: 0 00:21:23.057 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1022, failed to submit 10401 00:21:23.057 success 745, unsuccess 277, failed 0 00:21:23.057 14:04:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:23.057 14:04:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:26.340 Initializing NVMe Controllers 00:21:26.340 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:26.340 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:26.340 Initialization complete. Launching workers. 00:21:26.340 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8981, failed: 0 00:21:26.340 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1156, failed to submit 7825 00:21:26.340 success 411, unsuccess 745, failed 0 00:21:26.340 14:04:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:26.340 14:04:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:29.622 Initializing NVMe Controllers 00:21:29.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:29.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:29.622 Initialization complete. Launching workers. 
00:21:29.622 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31892, failed: 0 00:21:29.622 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2345, failed to submit 29547 00:21:29.622 success 449, unsuccess 1896, failed 0 00:21:29.622 14:04:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:29.622 14:04:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.622 14:04:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:29.622 14:04:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.622 14:04:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:29.622 14:04:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.622 14:04:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:30.188 14:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.188 14:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 83713 00:21:30.188 14:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 83713 ']' 00:21:30.188 14:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 83713 00:21:30.188 14:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:21:30.188 14:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:30.188 14:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83713 00:21:30.188 14:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:30.188 14:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:30.188 killing process with pid 83713 00:21:30.188 14:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83713' 00:21:30.188 14:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 83713 00:21:30.188 14:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 83713 00:21:30.447 00:21:30.447 real 0m10.765s 00:21:30.447 user 0m43.399s 00:21:30.447 sys 0m2.347s 00:21:30.447 14:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:30.447 ************************************ 00:21:30.447 END TEST spdk_target_abort 00:21:30.448 ************************************ 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:30.448 14:04:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:30.448 14:04:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:30.448 14:04:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:30.448 14:04:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:30.448 ************************************ 00:21:30.448 START TEST kernel_target_abort 00:21:30.448 
************************************ 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:30.448 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:30.706 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:30.706 Waiting for block devices as requested 00:21:30.965 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:30.965 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:30.965 No valid GPT data, bailing 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:21:30.965 14:04:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:31.224 No valid GPT data, bailing 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:31.224 No valid GPT data, bailing 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:31.224 No valid GPT data, bailing 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:31.224 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 --hostid=71427938-e211-49fa-b6ad-486cdab0bd89 -a 10.0.0.1 -t tcp -s 4420 00:21:31.483 00:21:31.483 Discovery Log Number of Records 2, Generation counter 2 00:21:31.483 =====Discovery Log Entry 0====== 00:21:31.483 trtype: tcp 00:21:31.483 adrfam: ipv4 00:21:31.483 subtype: current discovery subsystem 00:21:31.483 treq: not specified, sq flow control disable supported 00:21:31.483 portid: 1 00:21:31.483 trsvcid: 4420 00:21:31.483 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:31.483 traddr: 10.0.0.1 00:21:31.483 eflags: none 00:21:31.483 sectype: none 00:21:31.483 =====Discovery Log Entry 1====== 00:21:31.483 trtype: tcp 00:21:31.483 adrfam: ipv4 00:21:31.483 subtype: nvme subsystem 00:21:31.483 treq: not specified, sq flow control disable supported 00:21:31.483 portid: 1 00:21:31.483 trsvcid: 4420 00:21:31.483 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:31.483 traddr: 10.0.0.1 00:21:31.483 eflags: none 00:21:31.483 sectype: none 00:21:31.483 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:31.483 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:31.483 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:31.483 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:31.483 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:31.483 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:31.483 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:31.483 14:04:20 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:31.483 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:31.483 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:31.483 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:31.483 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:31.483 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:31.483 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:31.483 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:31.483 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:31.483 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:31.483 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:31.483 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:31.483 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:31.483 14:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:34.770 Initializing NVMe Controllers 00:21:34.770 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:34.770 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:34.770 Initialization complete. Launching workers. 00:21:34.770 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33233, failed: 0 00:21:34.770 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33233, failed to submit 0 00:21:34.770 success 0, unsuccess 33233, failed 0 00:21:34.770 14:04:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:34.770 14:04:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:38.084 Initializing NVMe Controllers 00:21:38.084 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:38.084 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:38.084 Initialization complete. Launching workers. 
00:21:38.084 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68173, failed: 0 00:21:38.084 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29740, failed to submit 38433 00:21:38.084 success 0, unsuccess 29740, failed 0 00:21:38.084 14:04:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:38.084 14:04:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:41.371 Initializing NVMe Controllers 00:21:41.372 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:41.372 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:41.372 Initialization complete. Launching workers. 00:21:41.372 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80581, failed: 0 00:21:41.372 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20148, failed to submit 60433 00:21:41.372 success 0, unsuccess 20148, failed 0 00:21:41.372 14:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:41.372 14:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:41.372 14:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:21:41.372 14:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:41.372 14:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:41.372 14:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:41.372 14:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:41.372 14:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:21:41.372 14:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:41.372 14:04:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:41.630 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:43.573 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:43.573 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:43.836 00:21:43.836 real 0m13.247s 00:21:43.836 user 0m6.321s 00:21:43.836 sys 0m4.331s 00:21:43.836 14:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:43.836 14:04:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:43.836 ************************************ 00:21:43.836 END TEST kernel_target_abort 00:21:43.836 ************************************ 00:21:43.836 14:04:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:43.836 14:04:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:43.836 
14:04:32 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:43.836 14:04:32 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:21:43.836 14:04:32 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:43.836 14:04:32 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:21:43.836 14:04:32 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:43.836 14:04:32 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:43.836 rmmod nvme_tcp 00:21:43.836 rmmod nvme_fabrics 00:21:43.836 rmmod nvme_keyring 00:21:43.836 14:04:32 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:43.836 14:04:32 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:21:43.836 14:04:32 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:21:43.836 14:04:32 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 83713 ']' 00:21:43.836 14:04:32 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 83713 00:21:43.836 14:04:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 83713 ']' 00:21:43.836 14:04:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 83713 00:21:43.836 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (83713) - No such process 00:21:43.836 Process with pid 83713 is not found 00:21:43.836 14:04:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 83713 is not found' 00:21:43.836 14:04:32 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:43.836 14:04:32 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:44.094 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:44.094 Waiting for block devices as requested 00:21:44.352 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:44.352 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:44.352 14:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:44.352 14:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:44.352 14:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:44.352 14:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:44.352 14:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.352 14:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:44.352 14:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.352 14:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:44.352 00:21:44.352 real 0m27.276s 00:21:44.352 user 0m50.939s 00:21:44.352 sys 0m8.035s 00:21:44.352 14:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:44.352 ************************************ 00:21:44.352 END TEST nvmf_abort_qd_sizes 00:21:44.352 14:04:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:44.352 ************************************ 00:21:44.609 14:04:33 -- spdk/autotest.sh@299 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:44.609 14:04:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:44.609 14:04:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:44.609 14:04:33 -- common/autotest_common.sh@10 -- # set +x 
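For kernel_target_abort the trace above did the mirror-image setup against the in-kernel target: setup.sh reset handed the disks back to the kernel nvme driver, a local namespace that is not zoned and carries no partition signature was picked as the backing device (/dev/nvme1n1 ends up selected), the subsystem was exported over NVMe/TCP through the nvmet configfs tree, verified with nvme discover, exercised with the same three abort runs, and then torn down. The redirection targets are hidden by xtrace, so the configfs attribute names in this sketch are the standard Linux nvmet ones and are an assumption rather than a literal copy of nvmf/common.sh:

  modprobe nvmet
  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  ns=$sub/namespaces/1
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$sub" "$ns" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"   # attribute name assumed
  echo 1 > "$sub/attr_allow_any_host"
  echo /dev/nvme1n1 > "$ns/device_path"                       # back the namespace with the spare disk
  echo 1 > "$ns/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                            # start listening
  # teardown, matching clean_kernel_target in the trace:
  echo 0 > "$ns/enable"
  rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$ns" "$port" "$sub"
  modprobe -r nvmet_tcp nvmet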
00:21:44.609 ************************************ 00:21:44.609 START TEST keyring_file 00:21:44.609 ************************************ 00:21:44.610 14:04:33 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:44.610 * Looking for test storage... 00:21:44.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:44.610 14:04:33 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:44.610 14:04:33 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:44.610 14:04:33 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.610 14:04:33 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.610 14:04:33 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.610 14:04:33 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.610 14:04:33 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.610 14:04:33 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.610 14:04:33 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:44.610 14:04:33 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@47 -- # : 0 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:44.610 14:04:33 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:44.610 14:04:33 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:44.610 14:04:33 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:44.610 14:04:33 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:44.610 14:04:33 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:44.610 14:04:33 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:44.610 14:04:33 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:44.610 14:04:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:44.610 14:04:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:44.610 14:04:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:44.610 14:04:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:44.610 14:04:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:44.610 14:04:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tupJwTFsYt 00:21:44.610 14:04:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:44.610 14:04:33 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tupJwTFsYt 00:21:44.610 14:04:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tupJwTFsYt 00:21:44.610 14:04:33 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.tupJwTFsYt 00:21:44.610 14:04:33 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:44.610 14:04:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:44.610 14:04:33 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:44.610 14:04:33 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:44.610 14:04:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:44.610 14:04:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:44.610 14:04:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.P3JGMZHH44 00:21:44.610 14:04:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:44.610 14:04:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:44.610 14:04:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.P3JGMZHH44 00:21:44.868 14:04:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.P3JGMZHH44 00:21:44.868 14:04:33 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.P3JGMZHH44 00:21:44.868 14:04:33 keyring_file -- keyring/file.sh@30 -- # tgtpid=84585 00:21:44.868 14:04:33 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:44.868 14:04:33 keyring_file -- keyring/file.sh@32 -- # waitforlisten 84585 00:21:44.868 14:04:33 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 84585 ']' 00:21:44.868 14:04:33 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.868 14:04:33 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:44.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.868 14:04:33 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.868 14:04:33 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:44.868 14:04:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:44.868 [2024-07-25 14:04:33.708370] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
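Before bdevperf comes up, the keyring_file trace above created the two TLS PSK files that will be registered as key0 and key1: each prep_key call makes a temp file, writes the NVMeTLSkey-1 interchange form of the hex key into it via the repo's format_interchange_psk helper (the python one-liner in the trace), and restricts it to mode 0600. A condensed sketch of that flow; the interchange encoding itself is left to the helper and not reproduced here:

  prep_key() {
      local name=$1 key=$2 digest=$3 path
      path=$(mktemp)                                     # e.g. /tmp/tmp.tupJwTFsYt
      format_interchange_psk "$key" "$digest" > "$path"  # NVMeTLSkey-1 framing done by the helper
      chmod 0600 "$path"                                 # matches the chmod in the trace
      echo "$path"
  }
  key0path=$(prep_key key0 00112233445566778899aabbccddeeff 0)
  key1path=$(prep_key key1 112233445566778899aabbccddeeff00 0)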
00:21:44.868 [2024-07-25 14:04:33.708468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84585 ] 00:21:44.868 [2024-07-25 14:04:33.843687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.126 [2024-07-25 14:04:33.976478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.126 [2024-07-25 14:04:34.037148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:45.693 14:04:34 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:45.693 14:04:34 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:21:45.693 14:04:34 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:45.693 14:04:34 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.693 14:04:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:45.693 [2024-07-25 14:04:34.678735] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.693 null0 00:21:45.693 [2024-07-25 14:04:34.710974] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:45.693 [2024-07-25 14:04:34.711272] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:45.693 [2024-07-25 14:04:34.718912] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:45.693 14:04:34 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.951 14:04:34 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:45.951 14:04:34 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:45.951 14:04:34 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:45.951 14:04:34 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:45.951 14:04:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:45.951 14:04:34 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:45.951 14:04:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:45.952 14:04:34 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:45.952 14:04:34 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.952 14:04:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:45.952 [2024-07-25 14:04:34.730906] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:45.952 request: 00:21:45.952 { 00:21:45.952 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:45.952 "secure_channel": false, 00:21:45.952 "listen_address": { 00:21:45.952 "trtype": "tcp", 00:21:45.952 "traddr": "127.0.0.1", 00:21:45.952 "trsvcid": "4420" 00:21:45.952 }, 00:21:45.952 "method": "nvmf_subsystem_add_listener", 00:21:45.952 "req_id": 1 00:21:45.952 } 00:21:45.952 Got JSON-RPC error response 00:21:45.952 response: 00:21:45.952 { 00:21:45.952 "code": -32602, 00:21:45.952 "message": "Invalid parameters" 00:21:45.952 } 00:21:45.952 14:04:34 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
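The request/response pair above is a deliberate negative test: adding the same 127.0.0.1:4420 listener a second time must fail with "Listener already exists", and the NOT wrapper (whose exit-status bookkeeping continues below) turns that expected failure into a pass. A hypothetical condensation of what is being asserted, written without the helper from test/common/autotest_common.sh:

  if rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
      nqn.2016-06.io.spdk:cnode0; then
      echo "duplicate listener was unexpectedly accepted" >&2
      exit 1
  fi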
00:21:45.952 14:04:34 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:45.952 14:04:34 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:45.952 14:04:34 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:45.952 14:04:34 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:45.952 14:04:34 keyring_file -- keyring/file.sh@46 -- # bperfpid=84598 00:21:45.952 14:04:34 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:45.952 14:04:34 keyring_file -- keyring/file.sh@48 -- # waitforlisten 84598 /var/tmp/bperf.sock 00:21:45.952 14:04:34 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 84598 ']' 00:21:45.952 14:04:34 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:45.952 14:04:34 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:45.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:45.952 14:04:34 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:45.952 14:04:34 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:45.952 14:04:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:45.952 [2024-07-25 14:04:34.787426] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:21:45.952 [2024-07-25 14:04:34.787504] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84598 ] 00:21:45.952 [2024-07-25 14:04:34.924116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.210 [2024-07-25 14:04:35.048980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.210 [2024-07-25 14:04:35.104894] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:46.777 14:04:35 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:46.777 14:04:35 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:21:46.777 14:04:35 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tupJwTFsYt 00:21:46.777 14:04:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tupJwTFsYt 00:21:47.035 14:04:35 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.P3JGMZHH44 00:21:47.035 14:04:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.P3JGMZHH44 00:21:47.293 14:04:36 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:21:47.293 14:04:36 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:21:47.293 14:04:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:47.293 14:04:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:47.293 14:04:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:47.551 14:04:36 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.tupJwTFsYt == 
\/\t\m\p\/\t\m\p\.\t\u\p\J\w\T\F\s\Y\t ]] 00:21:47.551 14:04:36 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:21:47.551 14:04:36 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:47.551 14:04:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:47.551 14:04:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:47.551 14:04:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:47.809 14:04:36 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.P3JGMZHH44 == \/\t\m\p\/\t\m\p\.\P\3\J\G\M\Z\H\H\4\4 ]] 00:21:47.809 14:04:36 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:21:47.809 14:04:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:47.809 14:04:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:47.809 14:04:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:47.809 14:04:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:47.809 14:04:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:48.098 14:04:36 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:21:48.098 14:04:36 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:21:48.098 14:04:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:48.098 14:04:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:48.098 14:04:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:48.098 14:04:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:48.098 14:04:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:48.356 14:04:37 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:48.356 14:04:37 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:48.356 14:04:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:48.615 [2024-07-25 14:04:37.451472] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:48.615 nvme0n1 00:21:48.615 14:04:37 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:21:48.615 14:04:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:48.615 14:04:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:48.615 14:04:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:48.615 14:04:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:48.615 14:04:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:48.874 14:04:37 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:21:48.874 14:04:37 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:21:48.874 14:04:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:48.874 14:04:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:48.874 14:04:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 00:21:48.874 14:04:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:48.874 14:04:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:49.132 14:04:38 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:21:49.132 14:04:38 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:49.132 Running I/O for 1 seconds... 00:21:50.504 00:21:50.504 Latency(us) 00:21:50.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.504 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:50.504 nvme0n1 : 1.01 11370.57 44.42 0.00 0.00 11216.77 5838.66 21567.30 00:21:50.504 =================================================================================================================== 00:21:50.504 Total : 11370.57 44.42 0.00 0.00 11216.77 5838.66 21567.30 00:21:50.504 0 00:21:50.504 14:04:39 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:50.504 14:04:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:50.504 14:04:39 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:21:50.504 14:04:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:50.504 14:04:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:50.504 14:04:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:50.504 14:04:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:50.504 14:04:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:50.762 14:04:39 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:21:50.762 14:04:39 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:21:50.762 14:04:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:50.762 14:04:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:50.762 14:04:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:50.762 14:04:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:50.762 14:04:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:51.328 14:04:40 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:51.328 14:04:40 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:51.328 14:04:40 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:51.328 14:04:40 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:51.328 14:04:40 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:51.328 14:04:40 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:51.328 14:04:40 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:51.328 14:04:40 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
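Before the negative-path checks that follow, the happy-path RPC sequence just traced is worth seeing in one place. The sketch below replays it outside the harness wrappers (bperf_cmd and get_refcnt are omitted); the socket path, NQNs, key names and temp-file paths are copied from the log, and the single combined jq filter is a simplification of the two-step filter used above.
```bash
# Illustrative replay of the traced happy path: register the two key files with the
# bdevperf RPC server, attach an NVMe/TCP controller that uses key0 as its TLS PSK,
# check the reference count, run the workload, and detach again. Error handling omitted.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

$rpc keyring_file_add_key key0 /tmp/tmp.tupJwTFsYt
$rpc keyring_file_add_key key1 /tmp/tmp.P3JGMZHH44

$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# key0 is now referenced by the keyring and by the attached controller, hence refcnt 2
$rpc keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

$rpc bdev_nvme_detach_controller nvme0
```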
00:21:51.328 14:04:40 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:51.328 14:04:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:51.328 [2024-07-25 14:04:40.326279] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:51.328 [2024-07-25 14:04:40.326937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc434f0 (107): Transport endpoint is not connected 00:21:51.328 [2024-07-25 14:04:40.327928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc434f0 (9): Bad file descriptor 00:21:51.328 [2024-07-25 14:04:40.328924] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:51.328 [2024-07-25 14:04:40.328950] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:51.328 [2024-07-25 14:04:40.328961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:51.328 request: 00:21:51.328 { 00:21:51.328 "name": "nvme0", 00:21:51.328 "trtype": "tcp", 00:21:51.328 "traddr": "127.0.0.1", 00:21:51.328 "adrfam": "ipv4", 00:21:51.328 "trsvcid": "4420", 00:21:51.328 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:51.328 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:51.328 "prchk_reftag": false, 00:21:51.328 "prchk_guard": false, 00:21:51.328 "hdgst": false, 00:21:51.328 "ddgst": false, 00:21:51.328 "psk": "key1", 00:21:51.328 "method": "bdev_nvme_attach_controller", 00:21:51.328 "req_id": 1 00:21:51.328 } 00:21:51.328 Got JSON-RPC error response 00:21:51.328 response: 00:21:51.328 { 00:21:51.328 "code": -5, 00:21:51.328 "message": "Input/output error" 00:21:51.328 } 00:21:51.328 14:04:40 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:51.328 14:04:40 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:51.328 14:04:40 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:51.328 14:04:40 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:51.328 14:04:40 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:21:51.329 14:04:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:51.329 14:04:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:51.329 14:04:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:51.329 14:04:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:51.329 14:04:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:51.587 14:04:40 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:21:51.587 14:04:40 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:21:51.587 14:04:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:51.587 14:04:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:51.587 14:04:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:51.587 14:04:40 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:51.587 14:04:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:51.845 14:04:40 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:51.845 14:04:40 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:21:51.845 14:04:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:52.103 14:04:41 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:21:52.103 14:04:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:52.360 14:04:41 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:21:52.360 14:04:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:52.360 14:04:41 keyring_file -- keyring/file.sh@77 -- # jq length 00:21:52.618 14:04:41 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:21:52.618 14:04:41 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.tupJwTFsYt 00:21:52.618 14:04:41 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.tupJwTFsYt 00:21:52.618 14:04:41 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:52.618 14:04:41 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.tupJwTFsYt 00:21:52.618 14:04:41 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:52.618 14:04:41 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:52.618 14:04:41 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:52.618 14:04:41 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:52.618 14:04:41 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tupJwTFsYt 00:21:52.618 14:04:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tupJwTFsYt 00:21:52.875 [2024-07-25 14:04:41.832317] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.tupJwTFsYt': 0100660 00:21:52.876 [2024-07-25 14:04:41.832380] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:52.876 request: 00:21:52.876 { 00:21:52.876 "name": "key0", 00:21:52.876 "path": "/tmp/tmp.tupJwTFsYt", 00:21:52.876 "method": "keyring_file_add_key", 00:21:52.876 "req_id": 1 00:21:52.876 } 00:21:52.876 Got JSON-RPC error response 00:21:52.876 response: 00:21:52.876 { 00:21:52.876 "code": -1, 00:21:52.876 "message": "Operation not permitted" 00:21:52.876 } 00:21:52.876 14:04:41 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:52.876 14:04:41 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:52.876 14:04:41 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:52.876 14:04:41 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:52.876 14:04:41 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.tupJwTFsYt 00:21:52.876 14:04:41 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tupJwTFsYt 00:21:52.876 14:04:41 keyring_file -- keyring/common.sh@8 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tupJwTFsYt 00:21:53.134 14:04:42 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.tupJwTFsYt 00:21:53.134 14:04:42 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:21:53.134 14:04:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:53.134 14:04:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:53.134 14:04:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:53.134 14:04:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:53.134 14:04:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:53.392 14:04:42 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:21:53.392 14:04:42 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:53.392 14:04:42 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:53.392 14:04:42 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:53.392 14:04:42 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:53.392 14:04:42 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.392 14:04:42 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:53.392 14:04:42 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.392 14:04:42 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:53.392 14:04:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:53.651 [2024-07-25 14:04:42.592476] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.tupJwTFsYt': No such file or directory 00:21:53.651 [2024-07-25 14:04:42.592522] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:53.651 [2024-07-25 14:04:42.592547] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:53.651 [2024-07-25 14:04:42.592555] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:53.651 [2024-07-25 14:04:42.592565] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:53.651 request: 00:21:53.651 { 00:21:53.651 "name": "nvme0", 00:21:53.651 "trtype": "tcp", 00:21:53.651 "traddr": "127.0.0.1", 00:21:53.651 "adrfam": "ipv4", 00:21:53.651 "trsvcid": "4420", 00:21:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:53.651 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:53.651 "prchk_reftag": false, 00:21:53.651 "prchk_guard": false, 00:21:53.651 "hdgst": false, 00:21:53.651 "ddgst": false, 00:21:53.651 "psk": "key0", 00:21:53.651 "method": "bdev_nvme_attach_controller", 00:21:53.651 "req_id": 1 00:21:53.651 } 00:21:53.651 
Got JSON-RPC error response 00:21:53.651 response: 00:21:53.651 { 00:21:53.651 "code": -19, 00:21:53.651 "message": "No such device" 00:21:53.651 } 00:21:53.651 14:04:42 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:53.651 14:04:42 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:53.651 14:04:42 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:53.651 14:04:42 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:53.651 14:04:42 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:21:53.651 14:04:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:53.909 14:04:42 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:53.910 14:04:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:53.910 14:04:42 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:53.910 14:04:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:53.910 14:04:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:53.910 14:04:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:53.910 14:04:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QDeyc5uBRs 00:21:53.910 14:04:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:53.910 14:04:42 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:53.910 14:04:42 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:53.910 14:04:42 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:53.910 14:04:42 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:53.910 14:04:42 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:53.910 14:04:42 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:53.910 14:04:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QDeyc5uBRs 00:21:53.910 14:04:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QDeyc5uBRs 00:21:53.910 14:04:42 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.QDeyc5uBRs 00:21:53.910 14:04:42 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QDeyc5uBRs 00:21:53.910 14:04:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QDeyc5uBRs 00:21:54.168 14:04:43 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:54.168 14:04:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:54.427 nvme0n1 00:21:54.427 14:04:43 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:21:54.427 14:04:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:54.427 14:04:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:54.427 14:04:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:54.427 14:04:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:21:54.427 14:04:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:54.993 14:04:43 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:21:54.993 14:04:43 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:21:54.993 14:04:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:54.993 14:04:43 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:21:54.993 14:04:43 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:21:54.993 14:04:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:54.993 14:04:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:54.993 14:04:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:55.265 14:04:44 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:21:55.265 14:04:44 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:21:55.265 14:04:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:55.265 14:04:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:55.265 14:04:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:55.265 14:04:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:55.265 14:04:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:55.540 14:04:44 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:21:55.540 14:04:44 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:55.540 14:04:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:55.830 14:04:44 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:21:55.830 14:04:44 keyring_file -- keyring/file.sh@104 -- # jq length 00:21:55.830 14:04:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:56.088 14:04:44 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:21:56.088 14:04:44 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QDeyc5uBRs 00:21:56.088 14:04:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QDeyc5uBRs 00:21:56.346 14:04:45 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.P3JGMZHH44 00:21:56.346 14:04:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.P3JGMZHH44 00:21:56.604 14:04:45 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:56.604 14:04:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:56.862 nvme0n1 00:21:56.862 14:04:45 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:21:56.862 14:04:45 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:57.119 14:04:46 keyring_file -- keyring/file.sh@112 -- # config='{ 00:21:57.119 "subsystems": [ 00:21:57.119 { 00:21:57.119 "subsystem": "keyring", 00:21:57.119 "config": [ 00:21:57.119 { 00:21:57.119 "method": "keyring_file_add_key", 00:21:57.119 "params": { 00:21:57.119 "name": "key0", 00:21:57.119 "path": "/tmp/tmp.QDeyc5uBRs" 00:21:57.119 } 00:21:57.119 }, 00:21:57.119 { 00:21:57.119 "method": "keyring_file_add_key", 00:21:57.119 "params": { 00:21:57.119 "name": "key1", 00:21:57.119 "path": "/tmp/tmp.P3JGMZHH44" 00:21:57.119 } 00:21:57.119 } 00:21:57.119 ] 00:21:57.119 }, 00:21:57.119 { 00:21:57.119 "subsystem": "iobuf", 00:21:57.119 "config": [ 00:21:57.119 { 00:21:57.119 "method": "iobuf_set_options", 00:21:57.119 "params": { 00:21:57.119 "small_pool_count": 8192, 00:21:57.119 "large_pool_count": 1024, 00:21:57.119 "small_bufsize": 8192, 00:21:57.119 "large_bufsize": 135168 00:21:57.119 } 00:21:57.119 } 00:21:57.119 ] 00:21:57.119 }, 00:21:57.119 { 00:21:57.119 "subsystem": "sock", 00:21:57.119 "config": [ 00:21:57.119 { 00:21:57.119 "method": "sock_set_default_impl", 00:21:57.119 "params": { 00:21:57.119 "impl_name": "uring" 00:21:57.119 } 00:21:57.119 }, 00:21:57.119 { 00:21:57.119 "method": "sock_impl_set_options", 00:21:57.119 "params": { 00:21:57.119 "impl_name": "ssl", 00:21:57.119 "recv_buf_size": 4096, 00:21:57.119 "send_buf_size": 4096, 00:21:57.119 "enable_recv_pipe": true, 00:21:57.119 "enable_quickack": false, 00:21:57.119 "enable_placement_id": 0, 00:21:57.119 "enable_zerocopy_send_server": true, 00:21:57.119 "enable_zerocopy_send_client": false, 00:21:57.119 "zerocopy_threshold": 0, 00:21:57.119 "tls_version": 0, 00:21:57.120 "enable_ktls": false 00:21:57.120 } 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "method": "sock_impl_set_options", 00:21:57.120 "params": { 00:21:57.120 "impl_name": "posix", 00:21:57.120 "recv_buf_size": 2097152, 00:21:57.120 "send_buf_size": 2097152, 00:21:57.120 "enable_recv_pipe": true, 00:21:57.120 "enable_quickack": false, 00:21:57.120 "enable_placement_id": 0, 00:21:57.120 "enable_zerocopy_send_server": true, 00:21:57.120 "enable_zerocopy_send_client": false, 00:21:57.120 "zerocopy_threshold": 0, 00:21:57.120 "tls_version": 0, 00:21:57.120 "enable_ktls": false 00:21:57.120 } 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "method": "sock_impl_set_options", 00:21:57.120 "params": { 00:21:57.120 "impl_name": "uring", 00:21:57.120 "recv_buf_size": 2097152, 00:21:57.120 "send_buf_size": 2097152, 00:21:57.120 "enable_recv_pipe": true, 00:21:57.120 "enable_quickack": false, 00:21:57.120 "enable_placement_id": 0, 00:21:57.120 "enable_zerocopy_send_server": false, 00:21:57.120 "enable_zerocopy_send_client": false, 00:21:57.120 "zerocopy_threshold": 0, 00:21:57.120 "tls_version": 0, 00:21:57.120 "enable_ktls": false 00:21:57.120 } 00:21:57.120 } 00:21:57.120 ] 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "subsystem": "vmd", 00:21:57.120 "config": [] 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "subsystem": "accel", 00:21:57.120 "config": [ 00:21:57.120 { 00:21:57.120 "method": "accel_set_options", 00:21:57.120 "params": { 00:21:57.120 "small_cache_size": 128, 00:21:57.120 "large_cache_size": 16, 00:21:57.120 "task_count": 2048, 00:21:57.120 "sequence_count": 2048, 00:21:57.120 "buf_count": 2048 00:21:57.120 } 00:21:57.120 } 00:21:57.120 ] 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "subsystem": "bdev", 00:21:57.120 "config": [ 00:21:57.120 { 
00:21:57.120 "method": "bdev_set_options", 00:21:57.120 "params": { 00:21:57.120 "bdev_io_pool_size": 65535, 00:21:57.120 "bdev_io_cache_size": 256, 00:21:57.120 "bdev_auto_examine": true, 00:21:57.120 "iobuf_small_cache_size": 128, 00:21:57.120 "iobuf_large_cache_size": 16 00:21:57.120 } 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "method": "bdev_raid_set_options", 00:21:57.120 "params": { 00:21:57.120 "process_window_size_kb": 1024, 00:21:57.120 "process_max_bandwidth_mb_sec": 0 00:21:57.120 } 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "method": "bdev_iscsi_set_options", 00:21:57.120 "params": { 00:21:57.120 "timeout_sec": 30 00:21:57.120 } 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "method": "bdev_nvme_set_options", 00:21:57.120 "params": { 00:21:57.120 "action_on_timeout": "none", 00:21:57.120 "timeout_us": 0, 00:21:57.120 "timeout_admin_us": 0, 00:21:57.120 "keep_alive_timeout_ms": 10000, 00:21:57.120 "arbitration_burst": 0, 00:21:57.120 "low_priority_weight": 0, 00:21:57.120 "medium_priority_weight": 0, 00:21:57.120 "high_priority_weight": 0, 00:21:57.120 "nvme_adminq_poll_period_us": 10000, 00:21:57.120 "nvme_ioq_poll_period_us": 0, 00:21:57.120 "io_queue_requests": 512, 00:21:57.120 "delay_cmd_submit": true, 00:21:57.120 "transport_retry_count": 4, 00:21:57.120 "bdev_retry_count": 3, 00:21:57.120 "transport_ack_timeout": 0, 00:21:57.120 "ctrlr_loss_timeout_sec": 0, 00:21:57.120 "reconnect_delay_sec": 0, 00:21:57.120 "fast_io_fail_timeout_sec": 0, 00:21:57.120 "disable_auto_failback": false, 00:21:57.120 "generate_uuids": false, 00:21:57.120 "transport_tos": 0, 00:21:57.120 "nvme_error_stat": false, 00:21:57.120 "rdma_srq_size": 0, 00:21:57.120 "io_path_stat": false, 00:21:57.120 "allow_accel_sequence": false, 00:21:57.120 "rdma_max_cq_size": 0, 00:21:57.120 "rdma_cm_event_timeout_ms": 0, 00:21:57.120 "dhchap_digests": [ 00:21:57.120 "sha256", 00:21:57.120 "sha384", 00:21:57.120 "sha512" 00:21:57.120 ], 00:21:57.120 "dhchap_dhgroups": [ 00:21:57.120 "null", 00:21:57.120 "ffdhe2048", 00:21:57.120 "ffdhe3072", 00:21:57.120 "ffdhe4096", 00:21:57.120 "ffdhe6144", 00:21:57.120 "ffdhe8192" 00:21:57.120 ] 00:21:57.120 } 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "method": "bdev_nvme_attach_controller", 00:21:57.120 "params": { 00:21:57.120 "name": "nvme0", 00:21:57.120 "trtype": "TCP", 00:21:57.120 "adrfam": "IPv4", 00:21:57.120 "traddr": "127.0.0.1", 00:21:57.120 "trsvcid": "4420", 00:21:57.120 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:57.120 "prchk_reftag": false, 00:21:57.120 "prchk_guard": false, 00:21:57.120 "ctrlr_loss_timeout_sec": 0, 00:21:57.120 "reconnect_delay_sec": 0, 00:21:57.120 "fast_io_fail_timeout_sec": 0, 00:21:57.120 "psk": "key0", 00:21:57.120 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:57.120 "hdgst": false, 00:21:57.120 "ddgst": false 00:21:57.120 } 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "method": "bdev_nvme_set_hotplug", 00:21:57.120 "params": { 00:21:57.120 "period_us": 100000, 00:21:57.120 "enable": false 00:21:57.120 } 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "method": "bdev_wait_for_examine" 00:21:57.120 } 00:21:57.120 ] 00:21:57.120 }, 00:21:57.120 { 00:21:57.120 "subsystem": "nbd", 00:21:57.120 "config": [] 00:21:57.120 } 00:21:57.120 ] 00:21:57.120 }' 00:21:57.120 14:04:46 keyring_file -- keyring/file.sh@114 -- # killprocess 84598 00:21:57.120 14:04:46 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 84598 ']' 00:21:57.120 14:04:46 keyring_file -- common/autotest_common.sh@954 -- # kill -0 84598 00:21:57.120 14:04:46 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:21:57.120 14:04:46 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:57.120 14:04:46 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84598 00:21:57.120 killing process with pid 84598 00:21:57.120 Received shutdown signal, test time was about 1.000000 seconds 00:21:57.120 00:21:57.120 Latency(us) 00:21:57.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.120 =================================================================================================================== 00:21:57.120 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:57.120 14:04:46 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:57.120 14:04:46 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:57.120 14:04:46 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84598' 00:21:57.120 14:04:46 keyring_file -- common/autotest_common.sh@969 -- # kill 84598 00:21:57.120 14:04:46 keyring_file -- common/autotest_common.sh@974 -- # wait 84598 00:21:57.379 14:04:46 keyring_file -- keyring/file.sh@117 -- # bperfpid=84848 00:21:57.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:57.379 14:04:46 keyring_file -- keyring/file.sh@119 -- # waitforlisten 84848 /var/tmp/bperf.sock 00:21:57.379 14:04:46 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:21:57.379 "subsystems": [ 00:21:57.379 { 00:21:57.379 "subsystem": "keyring", 00:21:57.379 "config": [ 00:21:57.379 { 00:21:57.379 "method": "keyring_file_add_key", 00:21:57.379 "params": { 00:21:57.379 "name": "key0", 00:21:57.379 "path": "/tmp/tmp.QDeyc5uBRs" 00:21:57.379 } 00:21:57.379 }, 00:21:57.379 { 00:21:57.379 "method": "keyring_file_add_key", 00:21:57.379 "params": { 00:21:57.379 "name": "key1", 00:21:57.379 "path": "/tmp/tmp.P3JGMZHH44" 00:21:57.379 } 00:21:57.379 } 00:21:57.379 ] 00:21:57.379 }, 00:21:57.379 { 00:21:57.379 "subsystem": "iobuf", 00:21:57.379 "config": [ 00:21:57.379 { 00:21:57.379 "method": "iobuf_set_options", 00:21:57.379 "params": { 00:21:57.379 "small_pool_count": 8192, 00:21:57.379 "large_pool_count": 1024, 00:21:57.379 "small_bufsize": 8192, 00:21:57.379 "large_bufsize": 135168 00:21:57.379 } 00:21:57.379 } 00:21:57.380 ] 00:21:57.380 }, 00:21:57.380 { 00:21:57.380 "subsystem": "sock", 00:21:57.380 "config": [ 00:21:57.380 { 00:21:57.380 "method": "sock_set_default_impl", 00:21:57.380 "params": { 00:21:57.380 "impl_name": "uring" 00:21:57.380 } 00:21:57.380 }, 00:21:57.380 { 00:21:57.380 "method": "sock_impl_set_options", 00:21:57.380 "params": { 00:21:57.380 "impl_name": "ssl", 00:21:57.380 "recv_buf_size": 4096, 00:21:57.380 "send_buf_size": 4096, 00:21:57.380 "enable_recv_pipe": true, 00:21:57.380 "enable_quickack": false, 00:21:57.380 "enable_placement_id": 0, 00:21:57.380 "enable_zerocopy_send_server": true, 00:21:57.380 "enable_zerocopy_send_client": false, 00:21:57.380 "zerocopy_threshold": 0, 00:21:57.380 "tls_version": 0, 00:21:57.380 "enable_ktls": false 00:21:57.380 } 00:21:57.380 }, 00:21:57.380 { 00:21:57.380 "method": "sock_impl_set_options", 00:21:57.380 "params": { 00:21:57.380 "impl_name": "posix", 00:21:57.380 "recv_buf_size": 2097152, 00:21:57.380 "send_buf_size": 2097152, 00:21:57.380 "enable_recv_pipe": true, 00:21:57.380 "enable_quickack": false, 00:21:57.380 "enable_placement_id": 0, 00:21:57.380 "enable_zerocopy_send_server": true, 00:21:57.380 
"enable_zerocopy_send_client": false, 00:21:57.380 "zerocopy_threshold": 0, 00:21:57.380 "tls_version": 0, 00:21:57.380 "enable_ktls": false 00:21:57.380 } 00:21:57.380 }, 00:21:57.380 { 00:21:57.380 "method": "sock_impl_set_options", 00:21:57.380 "params": { 00:21:57.380 "impl_name": "uring", 00:21:57.380 "recv_buf_size": 2097152, 00:21:57.380 "send_buf_size": 2097152, 00:21:57.380 "enable_recv_pipe": true, 00:21:57.380 "enable_quickack": false, 00:21:57.380 "enable_placement_id": 0, 00:21:57.380 "enable_zerocopy_send_server": false, 00:21:57.380 "enable_zerocopy_send_client": false, 00:21:57.380 "zerocopy_threshold": 0, 00:21:57.380 "tls_version": 0, 00:21:57.380 "enable_ktls": false 00:21:57.380 } 00:21:57.380 } 00:21:57.380 ] 00:21:57.380 }, 00:21:57.380 { 00:21:57.380 "subsystem": "vmd", 00:21:57.380 "config": [] 00:21:57.380 }, 00:21:57.380 { 00:21:57.380 "subsystem": "accel", 00:21:57.380 "config": [ 00:21:57.380 { 00:21:57.380 "method": "accel_set_options", 00:21:57.380 "params": { 00:21:57.380 "small_cache_size": 128, 00:21:57.380 "large_cache_size": 16, 00:21:57.380 "task_count": 2048, 00:21:57.380 "sequence_count": 2048, 00:21:57.380 "buf_count": 2048 00:21:57.380 } 00:21:57.380 } 00:21:57.380 ] 00:21:57.380 }, 00:21:57.380 { 00:21:57.380 "subsystem": "bdev", 00:21:57.380 "config": [ 00:21:57.380 { 00:21:57.380 "method": "bdev_set_options", 00:21:57.380 "params": { 00:21:57.380 "bdev_io_pool_size": 65535, 00:21:57.380 "bdev_io_cache_size": 256, 00:21:57.380 "bdev_auto_examine": true, 00:21:57.380 "iobuf_small_cache_size": 128, 00:21:57.380 "iobuf_large_cache_size": 16 00:21:57.380 } 00:21:57.380 }, 00:21:57.380 { 00:21:57.380 "method": "bdev_raid_set_options", 00:21:57.380 "params": { 00:21:57.380 "process_window_size_kb": 1024, 00:21:57.380 "process_max_bandwidth_mb_sec": 0 00:21:57.380 } 00:21:57.380 }, 00:21:57.380 { 00:21:57.380 "method": "bdev_iscsi_set_options", 00:21:57.380 "params": { 00:21:57.380 "timeout_sec": 30 00:21:57.380 } 00:21:57.380 }, 00:21:57.380 { 00:21:57.380 "method": "bdev_nvme_set_options", 00:21:57.380 "params": { 00:21:57.380 "action_on_timeout": "none", 00:21:57.380 "timeout_us": 0, 00:21:57.380 "timeout_admin_us": 0, 00:21:57.380 "keep_alive_timeout_ms": 10000, 00:21:57.380 "arbitration_burst": 0, 00:21:57.380 "low_priority_weight": 0, 00:21:57.380 "medium_priority_weight": 0, 00:21:57.380 "high_priority_weight": 0, 00:21:57.380 "nvme_adminq_poll_period_us": 10000, 00:21:57.380 "nvme_ioq_poll_period_us": 0, 00:21:57.380 "io_queue_requests": 512, 00:21:57.380 "delay_cmd_submit": true, 00:21:57.380 "transport_retry_count": 4, 00:21:57.380 "bdev_retry_count": 3, 00:21:57.380 "transport_ack_timeout": 0, 00:21:57.380 "ctrlr_loss_timeout_sec": 0, 00:21:57.380 "reconnect_delay_sec": 0, 00:21:57.380 "fast_io_fail_timeout_sec": 0, 00:21:57.380 "disable_auto_failback": false, 00:21:57.380 "generate_uuids": false, 00:21:57.380 "transport_tos": 0, 00:21:57.380 "nvme_error_stat": false, 00:21:57.380 "rdma_srq_size": 0, 00:21:57.380 "io_path_stat": false, 00:21:57.380 "allow_accel_sequence": false, 00:21:57.380 "rdma_max_cq_size": 0, 00:21:57.380 "rdma_cm_event_timeout_ms": 0, 00:21:57.380 "dhchap_digests": [ 00:21:57.380 "sha256", 00:21:57.380 "sha384", 00:21:57.380 "sha512" 00:21:57.380 ], 00:21:57.380 "dhchap_dhgroups": [ 00:21:57.380 "null", 00:21:57.380 "ffdhe2048", 00:21:57.380 "ffdhe3072", 00:21:57.380 "ffdhe4096", 00:21:57.380 "ffdhe6144", 00:21:57.380 "ffdhe8192" 00:21:57.380 ] 00:21:57.380 } 00:21:57.380 }, 00:21:57.380 { 00:21:57.380 "method": 
"bdev_nvme_attach_controller", 00:21:57.380 "params": { 00:21:57.380 "name": "nvme0", 00:21:57.380 "trtype": "TCP", 00:21:57.380 "adrfam": "IPv4", 00:21:57.380 "traddr": "127.0.0.1", 00:21:57.380 "trsvcid": "4420", 00:21:57.380 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:57.380 "prchk_reftag": false, 00:21:57.380 "prchk_guard": false, 00:21:57.380 "ctrlr_loss_timeout_sec": 0, 00:21:57.380 "reconnect_delay_sec": 0, 00:21:57.380 "fast_io_fail_timeout_sec": 0, 00:21:57.380 "psk": "key0", 00:21:57.380 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:57.380 "hdgst": false, 00:21:57.380 "ddgst": false 00:21:57.380 } 00:21:57.380 }, 00:21:57.380 { 00:21:57.380 "method": "bdev_nvme_set_hotplug", 00:21:57.380 "params": { 00:21:57.380 "period_us": 100000, 00:21:57.380 "enable": false 00:21:57.380 } 00:21:57.380 }, 00:21:57.380 { 00:21:57.380 "method": "bdev_wait_for_examine" 00:21:57.380 } 00:21:57.380 ] 00:21:57.380 }, 00:21:57.380 { 00:21:57.380 "subsystem": "nbd", 00:21:57.380 "config": [] 00:21:57.380 } 00:21:57.380 ] 00:21:57.380 }' 00:21:57.380 14:04:46 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 84848 ']' 00:21:57.380 14:04:46 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:57.380 14:04:46 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:57.380 14:04:46 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:57.380 14:04:46 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:57.380 14:04:46 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:57.380 14:04:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:57.380 [2024-07-25 14:04:46.313558] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:21:57.380 [2024-07-25 14:04:46.313806] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84848 ] 00:21:57.638 [2024-07-25 14:04:46.449605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.638 [2024-07-25 14:04:46.555472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.896 [2024-07-25 14:04:46.689865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:57.896 [2024-07-25 14:04:46.744490] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:58.463 14:04:47 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:58.463 14:04:47 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:21:58.463 14:04:47 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:21:58.463 14:04:47 keyring_file -- keyring/file.sh@120 -- # jq length 00:21:58.463 14:04:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:58.721 14:04:47 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:21:58.721 14:04:47 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:21:58.721 14:04:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:58.721 14:04:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:58.721 14:04:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:58.721 14:04:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:58.721 14:04:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:58.979 14:04:47 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:58.979 14:04:47 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:21:58.979 14:04:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:58.979 14:04:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:58.979 14:04:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:58.979 14:04:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:58.979 14:04:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:59.238 14:04:48 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:21:59.238 14:04:48 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:21:59.238 14:04:48 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:21:59.238 14:04:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:59.496 14:04:48 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:21:59.496 14:04:48 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:59.496 14:04:48 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.QDeyc5uBRs /tmp/tmp.P3JGMZHH44 00:21:59.496 14:04:48 keyring_file -- keyring/file.sh@20 -- # killprocess 84848 00:21:59.496 14:04:48 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 84848 ']' 00:21:59.496 14:04:48 keyring_file -- common/autotest_common.sh@954 -- # kill -0 84848 00:21:59.496 14:04:48 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:21:59.496 14:04:48 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:59.496 14:04:48 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84848 00:21:59.496 killing process with pid 84848 00:21:59.496 Received shutdown signal, test time was about 1.000000 seconds 00:21:59.496 00:21:59.496 Latency(us) 00:21:59.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.496 =================================================================================================================== 00:21:59.496 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:59.496 14:04:48 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:59.496 14:04:48 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:59.496 14:04:48 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84848' 00:21:59.496 14:04:48 keyring_file -- common/autotest_common.sh@969 -- # kill 84848 00:21:59.496 14:04:48 keyring_file -- common/autotest_common.sh@974 -- # wait 84848 00:22:00.065 14:04:48 keyring_file -- keyring/file.sh@21 -- # killprocess 84585 00:22:00.065 14:04:48 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 84585 ']' 00:22:00.065 14:04:48 keyring_file -- common/autotest_common.sh@954 -- # kill -0 84585 00:22:00.065 14:04:48 keyring_file -- common/autotest_common.sh@955 -- # uname 00:22:00.065 14:04:48 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:00.065 14:04:48 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84585 00:22:00.065 killing process with pid 84585 00:22:00.065 14:04:48 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:00.065 14:04:48 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:00.065 14:04:48 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84585' 00:22:00.065 14:04:48 keyring_file -- common/autotest_common.sh@969 -- # kill 84585 00:22:00.065 [2024-07-25 14:04:48.845446] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:00.065 14:04:48 keyring_file -- common/autotest_common.sh@974 -- # wait 84585 00:22:00.337 00:22:00.337 real 0m15.838s 00:22:00.337 user 0m39.311s 00:22:00.337 sys 0m3.034s 00:22:00.337 ************************************ 00:22:00.337 END TEST keyring_file 00:22:00.337 ************************************ 00:22:00.337 14:04:49 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:00.337 14:04:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:00.337 14:04:49 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:22:00.337 14:04:49 -- spdk/autotest.sh@301 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:00.337 14:04:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:00.337 14:04:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:00.337 14:04:49 -- common/autotest_common.sh@10 -- # set +x 00:22:00.337 ************************************ 00:22:00.337 START TEST keyring_linux 00:22:00.337 ************************************ 00:22:00.337 14:04:49 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:00.596 * Looking for test storage... 
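Both helper processes are torn down here with the same killprocess pattern (pids 84848 and 84585). A simplified reconstruction from the commands visible in the trace follows; the real helper in autotest_common.sh performs additional platform and safety checks (uname, refusing to kill sudo) that are left out.
```bash
# Simplified sketch of the killprocess pattern exercised above.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 0                      # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")         # reactor_0 / reactor_1 in this log
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap it so the next test starts clean
}

killprocess 84848   # bdevperf
killprocess 84585   # spdk_tgt
```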
00:22:00.596 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:00.596 14:04:49 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:00.596 14:04:49 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:00.596 14:04:49 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:22:00.596 14:04:49 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.596 14:04:49 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.596 14:04:49 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.596 14:04:49 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.596 14:04:49 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.596 14:04:49 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.596 14:04:49 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.596 14:04:49 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.596 14:04:49 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.596 14:04:49 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.596 14:04:49 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71427938-e211-49fa-b6ad-486cdab0bd89 00:22:00.596 14:04:49 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=71427938-e211-49fa-b6ad-486cdab0bd89 00:22:00.596 14:04:49 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.596 14:04:49 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.596 14:04:49 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:00.596 14:04:49 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.596 14:04:49 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:00.596 14:04:49 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.596 14:04:49 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.596 14:04:49 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.597 14:04:49 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.597 14:04:49 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.597 14:04:49 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.597 14:04:49 keyring_linux -- paths/export.sh@5 -- # export PATH 00:22:00.597 14:04:49 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:00.597 14:04:49 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:00.597 14:04:49 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:00.597 14:04:49 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:00.597 14:04:49 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:22:00.597 14:04:49 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:22:00.597 14:04:49 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:22:00.597 14:04:49 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:22:00.597 14:04:49 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:00.597 14:04:49 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:22:00.597 14:04:49 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:00.597 14:04:49 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:00.597 14:04:49 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:22:00.597 14:04:49 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@705 -- # python - 00:22:00.597 14:04:49 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:22:00.597 /tmp/:spdk-test:key0 00:22:00.597 14:04:49 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:22:00.597 14:04:49 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:22:00.597 14:04:49 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:00.597 14:04:49 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:22:00.597 14:04:49 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:00.597 14:04:49 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:00.597 14:04:49 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:22:00.597 14:04:49 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:22:00.597 14:04:49 keyring_linux -- nvmf/common.sh@705 -- # python - 00:22:00.597 14:04:49 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:22:00.597 /tmp/:spdk-test:key1 00:22:00.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.597 14:04:49 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:22:00.597 14:04:49 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=84965 00:22:00.597 14:04:49 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:00.597 14:04:49 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 84965 00:22:00.597 14:04:49 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 84965 ']' 00:22:00.597 14:04:49 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.597 14:04:49 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:00.597 14:04:49 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.597 14:04:49 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:00.597 14:04:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:00.597 [2024-07-25 14:04:49.572505] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
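The prep_key calls traced above convert the raw hex strings into the NVMe TLS PSK interchange format before writing them to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 with mode 0600. A minimal stand-alone sketch of that formatting step, under two assumptions (that the four bytes appended before base64 encoding are a little-endian CRC-32 of the key string, and that digest argument 0 becomes the "00" no-hash field); the authoritative logic is format_key in test/nvmf/common.sh:

  # Sketch only, not the SPDK helper itself; the CRC-32 byte order is an assumption here.
  key=00112233445566778899aabbccddeeff
  python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:00:" + base64.b64encode(k + struct.pack("<I", zlib.crc32(k) & 0xffffffff)).decode() + ":")' "$key"

Under those assumptions the output for key0 should match the NVMeTLSkey-1:00: interchange string that keyctl loads into the session keyring below.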
00:22:00.597 [2024-07-25 14:04:49.573134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84965 ] 00:22:00.856 [2024-07-25 14:04:49.711390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.856 [2024-07-25 14:04:49.821614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.856 [2024-07-25 14:04:49.876728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:01.791 14:04:50 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.791 14:04:50 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:22:01.791 14:04:50 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:22:01.791 14:04:50 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.791 14:04:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:01.791 [2024-07-25 14:04:50.581393] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.791 null0 00:22:01.791 [2024-07-25 14:04:50.613354] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:01.791 [2024-07-25 14:04:50.613610] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:01.791 14:04:50 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.791 14:04:50 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:22:01.791 241161430 00:22:01.791 14:04:50 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:22:01.791 968722083 00:22:01.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:01.791 14:04:50 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=84983 00:22:01.791 14:04:50 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:22:01.791 14:04:50 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 84983 /var/tmp/bperf.sock 00:22:01.791 14:04:50 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 84983 ']' 00:22:01.791 14:04:50 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:01.791 14:04:50 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:01.791 14:04:50 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:01.791 14:04:50 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.791 14:04:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:01.791 [2024-07-25 14:04:50.691531] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
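The two keyctl add calls above load the formatted PSKs into the caller's session keyring (@s) under the descriptions :spdk-test:key0 and :spdk-test:key1; the serial numbers they print (241161430 and 968722083) are what the check_keys and cleanup steps later resolve again via keyctl search. A condensed round trip using the same keyctl subcommands that appear in this trace, shown only as an illustration (desc, psk and sn are shorthand variables, and the payload is the key0 value from above):

  desc=:spdk-test:key0
  psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
  keyctl add user "$desc" "$psk" @s      # prints the serial of the newly linked key
  sn=$(keyctl search @s user "$desc")    # look the serial up again by description
  keyctl print "$sn"                     # dump the payload for verification
  keyctl unlink "$sn"                    # drop the key from the session keyring

bdevperf then refers to the key purely by its description, via bdev_nvme_attach_controller ... --psk :spdk-test:key0, as traced below.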
00:22:01.791 [2024-07-25 14:04:50.691835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84983 ] 00:22:02.049 [2024-07-25 14:04:50.826909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.049 [2024-07-25 14:04:50.962161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.982 14:04:51 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:02.982 14:04:51 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:22:02.982 14:04:51 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:22:02.982 14:04:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:22:02.982 14:04:51 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:22:02.982 14:04:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:03.549 [2024-07-25 14:04:52.314417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:03.549 14:04:52 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:03.549 14:04:52 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:03.830 [2024-07-25 14:04:52.607277] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.830 nvme0n1 00:22:03.830 14:04:52 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:22:03.830 14:04:52 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:22:03.830 14:04:52 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:03.830 14:04:52 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:03.830 14:04:52 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:03.830 14:04:52 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:04.088 14:04:52 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:22:04.088 14:04:52 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:04.088 14:04:52 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:22:04.088 14:04:52 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:22:04.088 14:04:52 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:04.088 14:04:52 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:22:04.088 14:04:52 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:04.348 14:04:53 keyring_linux -- keyring/linux.sh@25 -- # sn=241161430 00:22:04.348 14:04:53 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:22:04.348 14:04:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:04.348 
14:04:53 keyring_linux -- keyring/linux.sh@26 -- # [[ 241161430 == \2\4\1\1\6\1\4\3\0 ]] 00:22:04.349 14:04:53 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 241161430 00:22:04.349 14:04:53 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:22:04.349 14:04:53 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:04.619 Running I/O for 1 seconds... 00:22:05.554 00:22:05.554 Latency(us) 00:22:05.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.554 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:05.554 nvme0n1 : 1.01 10978.99 42.89 0.00 0.00 11590.84 8519.68 20852.36 00:22:05.554 =================================================================================================================== 00:22:05.554 Total : 10978.99 42.89 0.00 0.00 11590.84 8519.68 20852.36 00:22:05.554 0 00:22:05.554 14:04:54 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:05.554 14:04:54 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:05.812 14:04:54 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:22:05.812 14:04:54 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:22:05.812 14:04:54 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:05.812 14:04:54 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:05.812 14:04:54 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:05.812 14:04:54 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:06.069 14:04:54 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:22:06.069 14:04:54 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:06.069 14:04:54 keyring_linux -- keyring/linux.sh@23 -- # return 00:22:06.069 14:04:54 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:06.069 14:04:54 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:22:06.069 14:04:54 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:06.069 14:04:54 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:22:06.069 14:04:54 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.069 14:04:54 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:22:06.069 14:04:54 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.069 14:04:54 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:06.069 14:04:54 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:06.326 [2024-07-25 14:04:55.229909] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:06.326 [2024-07-25 14:04:55.230652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87460 (107): Transport endpoint is not connected 00:22:06.326 [2024-07-25 14:04:55.231623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87460 (9): Bad file descriptor 00:22:06.326 [2024-07-25 14:04:55.232619] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.326 [2024-07-25 14:04:55.232645] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:06.327 [2024-07-25 14:04:55.232658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:06.327 request: 00:22:06.327 { 00:22:06.327 "name": "nvme0", 00:22:06.327 "trtype": "tcp", 00:22:06.327 "traddr": "127.0.0.1", 00:22:06.327 "adrfam": "ipv4", 00:22:06.327 "trsvcid": "4420", 00:22:06.327 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:06.327 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:06.327 "prchk_reftag": false, 00:22:06.327 "prchk_guard": false, 00:22:06.327 "hdgst": false, 00:22:06.327 "ddgst": false, 00:22:06.327 "psk": ":spdk-test:key1", 00:22:06.327 "method": "bdev_nvme_attach_controller", 00:22:06.327 "req_id": 1 00:22:06.327 } 00:22:06.327 Got JSON-RPC error response 00:22:06.327 response: 00:22:06.327 { 00:22:06.327 "code": -5, 00:22:06.327 "message": "Input/output error" 00:22:06.327 } 00:22:06.327 14:04:55 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:22:06.327 14:04:55 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.327 14:04:55 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:06.327 14:04:55 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.327 14:04:55 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:22:06.327 14:04:55 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:06.327 14:04:55 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:22:06.327 14:04:55 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:22:06.327 14:04:55 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:22:06.327 14:04:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:06.327 14:04:55 keyring_linux -- keyring/linux.sh@33 -- # sn=241161430 00:22:06.327 14:04:55 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 241161430 00:22:06.327 1 links removed 00:22:06.327 14:04:55 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:06.327 14:04:55 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:22:06.327 14:04:55 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:22:06.327 14:04:55 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:22:06.327 14:04:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:22:06.327 14:04:55 keyring_linux -- keyring/linux.sh@33 -- # sn=968722083 00:22:06.327 14:04:55 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 968722083 00:22:06.327 1 links removed 00:22:06.327 14:04:55 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 84983 00:22:06.327 14:04:55 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 84983 ']' 00:22:06.327 14:04:55 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 84983 00:22:06.327 14:04:55 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:22:06.327 14:04:55 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.327 14:04:55 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84983 00:22:06.327 killing process with pid 84983 00:22:06.327 Received shutdown signal, test time was about 1.000000 seconds 00:22:06.327 00:22:06.327 Latency(us) 00:22:06.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.327 =================================================================================================================== 00:22:06.327 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:06.327 14:04:55 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:06.327 14:04:55 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:06.327 14:04:55 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84983' 00:22:06.327 14:04:55 keyring_linux -- common/autotest_common.sh@969 -- # kill 84983 00:22:06.327 14:04:55 keyring_linux -- common/autotest_common.sh@974 -- # wait 84983 00:22:06.584 14:04:55 keyring_linux -- keyring/linux.sh@42 -- # killprocess 84965 00:22:06.584 14:04:55 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 84965 ']' 00:22:06.584 14:04:55 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 84965 00:22:06.584 14:04:55 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:22:06.584 14:04:55 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.584 14:04:55 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84965 00:22:06.843 killing process with pid 84965 00:22:06.843 14:04:55 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:06.843 14:04:55 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:06.843 14:04:55 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84965' 00:22:06.843 14:04:55 keyring_linux -- common/autotest_common.sh@969 -- # kill 84965 00:22:06.843 14:04:55 keyring_linux -- common/autotest_common.sh@974 -- # wait 84965 00:22:07.100 00:22:07.100 real 0m6.728s 00:22:07.100 user 0m13.095s 00:22:07.101 sys 0m1.710s 00:22:07.101 14:04:56 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:07.101 ************************************ 00:22:07.101 END TEST keyring_linux 00:22:07.101 ************************************ 00:22:07.101 14:04:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:07.101 14:04:56 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:22:07.101 14:04:56 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:22:07.101 14:04:56 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:22:07.101 14:04:56 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:22:07.101 14:04:56 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:22:07.101 14:04:56 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:22:07.101 14:04:56 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:22:07.101 14:04:56 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:22:07.101 14:04:56 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:22:07.101 14:04:56 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:22:07.101 
14:04:56 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:22:07.101 14:04:56 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:22:07.101 14:04:56 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:22:07.101 14:04:56 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:22:07.101 14:04:56 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:22:07.101 14:04:56 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:22:07.101 14:04:56 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:22:07.101 14:04:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:07.101 14:04:56 -- common/autotest_common.sh@10 -- # set +x 00:22:07.101 14:04:56 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:22:07.101 14:04:56 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:22:07.101 14:04:56 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:22:07.101 14:04:56 -- common/autotest_common.sh@10 -- # set +x 00:22:08.999 INFO: APP EXITING 00:22:08.999 INFO: killing all VMs 00:22:08.999 INFO: killing vhost app 00:22:08.999 INFO: EXIT DONE 00:22:09.565 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:09.565 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:09.565 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:10.130 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:10.130 Cleaning 00:22:10.130 Removing: /var/run/dpdk/spdk0/config 00:22:10.130 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:10.130 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:10.130 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:10.130 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:10.130 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:10.130 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:10.130 Removing: /var/run/dpdk/spdk1/config 00:22:10.130 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:10.130 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:10.130 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:10.130 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:10.130 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:10.130 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:10.130 Removing: /var/run/dpdk/spdk2/config 00:22:10.130 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:10.130 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:10.130 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:10.130 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:10.130 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:10.130 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:10.130 Removing: /var/run/dpdk/spdk3/config 00:22:10.130 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:10.130 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:10.130 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:10.130 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:10.130 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:10.389 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:10.389 Removing: /var/run/dpdk/spdk4/config 00:22:10.389 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:10.389 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:10.389 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:10.389 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:10.389 Removing: 
/var/run/dpdk/spdk4/fbarray_memzone 00:22:10.389 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:10.389 Removing: /dev/shm/nvmf_trace.0 00:22:10.389 Removing: /dev/shm/spdk_tgt_trace.pid58751 00:22:10.389 Removing: /var/run/dpdk/spdk0 00:22:10.389 Removing: /var/run/dpdk/spdk1 00:22:10.389 Removing: /var/run/dpdk/spdk2 00:22:10.389 Removing: /var/run/dpdk/spdk3 00:22:10.389 Removing: /var/run/dpdk/spdk4 00:22:10.389 Removing: /var/run/dpdk/spdk_pid58606 00:22:10.389 Removing: /var/run/dpdk/spdk_pid58751 00:22:10.389 Removing: /var/run/dpdk/spdk_pid58944 00:22:10.389 Removing: /var/run/dpdk/spdk_pid59036 00:22:10.389 Removing: /var/run/dpdk/spdk_pid59058 00:22:10.389 Removing: /var/run/dpdk/spdk_pid59173 00:22:10.389 Removing: /var/run/dpdk/spdk_pid59191 00:22:10.389 Removing: /var/run/dpdk/spdk_pid59309 00:22:10.389 Removing: /var/run/dpdk/spdk_pid59500 00:22:10.389 Removing: /var/run/dpdk/spdk_pid59635 00:22:10.389 Removing: /var/run/dpdk/spdk_pid59705 00:22:10.389 Removing: /var/run/dpdk/spdk_pid59781 00:22:10.389 Removing: /var/run/dpdk/spdk_pid59868 00:22:10.389 Removing: /var/run/dpdk/spdk_pid59943 00:22:10.389 Removing: /var/run/dpdk/spdk_pid59981 00:22:10.389 Removing: /var/run/dpdk/spdk_pid60012 00:22:10.389 Removing: /var/run/dpdk/spdk_pid60074 00:22:10.389 Removing: /var/run/dpdk/spdk_pid60157 00:22:10.389 Removing: /var/run/dpdk/spdk_pid60595 00:22:10.389 Removing: /var/run/dpdk/spdk_pid60647 00:22:10.389 Removing: /var/run/dpdk/spdk_pid60698 00:22:10.389 Removing: /var/run/dpdk/spdk_pid60714 00:22:10.389 Removing: /var/run/dpdk/spdk_pid60781 00:22:10.389 Removing: /var/run/dpdk/spdk_pid60797 00:22:10.389 Removing: /var/run/dpdk/spdk_pid60864 00:22:10.389 Removing: /var/run/dpdk/spdk_pid60880 00:22:10.389 Removing: /var/run/dpdk/spdk_pid60931 00:22:10.389 Removing: /var/run/dpdk/spdk_pid60949 00:22:10.389 Removing: /var/run/dpdk/spdk_pid60989 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61007 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61130 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61165 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61234 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61544 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61562 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61593 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61612 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61627 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61652 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61665 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61681 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61704 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61719 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61734 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61759 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61772 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61788 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61812 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61826 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61847 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61866 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61885 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61895 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61931 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61950 00:22:10.389 Removing: /var/run/dpdk/spdk_pid61984 00:22:10.389 Removing: /var/run/dpdk/spdk_pid62038 00:22:10.389 Removing: /var/run/dpdk/spdk_pid62072 00:22:10.389 Removing: /var/run/dpdk/spdk_pid62089 00:22:10.389 Removing: /var/run/dpdk/spdk_pid62112 00:22:10.389 Removing: /var/run/dpdk/spdk_pid62127 00:22:10.389 Removing: 
/var/run/dpdk/spdk_pid62129 00:22:10.389 Removing: /var/run/dpdk/spdk_pid62178 00:22:10.389 Removing: /var/run/dpdk/spdk_pid62197 00:22:10.389 Removing: /var/run/dpdk/spdk_pid62220 00:22:10.389 Removing: /var/run/dpdk/spdk_pid62235 00:22:10.389 Removing: /var/run/dpdk/spdk_pid62245 00:22:10.389 Removing: /var/run/dpdk/spdk_pid62254 00:22:10.389 Removing: /var/run/dpdk/spdk_pid62269 00:22:10.389 Removing: /var/run/dpdk/spdk_pid62273 00:22:10.389 Removing: /var/run/dpdk/spdk_pid62288 00:22:10.389 Removing: /var/run/dpdk/spdk_pid62300 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62328 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62360 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62364 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62398 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62408 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62415 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62461 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62473 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62499 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62508 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62521 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62528 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62536 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62543 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62551 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62564 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62632 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62683 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62790 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62829 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62874 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62894 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62905 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62925 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62962 00:22:10.647 Removing: /var/run/dpdk/spdk_pid62983 00:22:10.647 Removing: /var/run/dpdk/spdk_pid63053 00:22:10.647 Removing: /var/run/dpdk/spdk_pid63069 00:22:10.647 Removing: /var/run/dpdk/spdk_pid63113 00:22:10.647 Removing: /var/run/dpdk/spdk_pid63186 00:22:10.647 Removing: /var/run/dpdk/spdk_pid63244 00:22:10.647 Removing: /var/run/dpdk/spdk_pid63273 00:22:10.647 Removing: /var/run/dpdk/spdk_pid63364 00:22:10.647 Removing: /var/run/dpdk/spdk_pid63407 00:22:10.647 Removing: /var/run/dpdk/spdk_pid63445 00:22:10.647 Removing: /var/run/dpdk/spdk_pid63669 00:22:10.647 Removing: /var/run/dpdk/spdk_pid63761 00:22:10.647 Removing: /var/run/dpdk/spdk_pid63795 00:22:10.647 Removing: /var/run/dpdk/spdk_pid64128 00:22:10.647 Removing: /var/run/dpdk/spdk_pid64166 00:22:10.647 Removing: /var/run/dpdk/spdk_pid64463 00:22:10.647 Removing: /var/run/dpdk/spdk_pid64867 00:22:10.647 Removing: /var/run/dpdk/spdk_pid65150 00:22:10.647 Removing: /var/run/dpdk/spdk_pid65939 00:22:10.647 Removing: /var/run/dpdk/spdk_pid66793 00:22:10.647 Removing: /var/run/dpdk/spdk_pid66909 00:22:10.647 Removing: /var/run/dpdk/spdk_pid66977 00:22:10.647 Removing: /var/run/dpdk/spdk_pid68237 00:22:10.647 Removing: /var/run/dpdk/spdk_pid68492 00:22:10.647 Removing: /var/run/dpdk/spdk_pid71915 00:22:10.647 Removing: /var/run/dpdk/spdk_pid72227 00:22:10.647 Removing: /var/run/dpdk/spdk_pid72335 00:22:10.647 Removing: /var/run/dpdk/spdk_pid72469 00:22:10.647 Removing: /var/run/dpdk/spdk_pid72501 00:22:10.647 Removing: /var/run/dpdk/spdk_pid72524 00:22:10.647 Removing: /var/run/dpdk/spdk_pid72552 00:22:10.647 Removing: /var/run/dpdk/spdk_pid72648 00:22:10.647 Removing: /var/run/dpdk/spdk_pid72788 00:22:10.647 Removing: /var/run/dpdk/spdk_pid72938 
00:22:10.647 Removing: /var/run/dpdk/spdk_pid73023 00:22:10.647 Removing: /var/run/dpdk/spdk_pid73218 00:22:10.647 Removing: /var/run/dpdk/spdk_pid73301 00:22:10.647 Removing: /var/run/dpdk/spdk_pid73394 00:22:10.647 Removing: /var/run/dpdk/spdk_pid73697 00:22:10.647 Removing: /var/run/dpdk/spdk_pid74108 00:22:10.647 Removing: /var/run/dpdk/spdk_pid74116 00:22:10.647 Removing: /var/run/dpdk/spdk_pid74386 00:22:10.647 Removing: /var/run/dpdk/spdk_pid74410 00:22:10.647 Removing: /var/run/dpdk/spdk_pid74424 00:22:10.647 Removing: /var/run/dpdk/spdk_pid74456 00:22:10.647 Removing: /var/run/dpdk/spdk_pid74461 00:22:10.647 Removing: /var/run/dpdk/spdk_pid74762 00:22:10.647 Removing: /var/run/dpdk/spdk_pid74805 00:22:10.647 Removing: /var/run/dpdk/spdk_pid75091 00:22:10.647 Removing: /var/run/dpdk/spdk_pid75288 00:22:10.647 Removing: /var/run/dpdk/spdk_pid75674 00:22:10.648 Removing: /var/run/dpdk/spdk_pid76186 00:22:10.648 Removing: /var/run/dpdk/spdk_pid77021 00:22:10.648 Removing: /var/run/dpdk/spdk_pid77610 00:22:10.648 Removing: /var/run/dpdk/spdk_pid77612 00:22:10.648 Removing: /var/run/dpdk/spdk_pid79526 00:22:10.648 Removing: /var/run/dpdk/spdk_pid79579 00:22:10.648 Removing: /var/run/dpdk/spdk_pid79638 00:22:10.648 Removing: /var/run/dpdk/spdk_pid79700 00:22:10.648 Removing: /var/run/dpdk/spdk_pid79822 00:22:10.648 Removing: /var/run/dpdk/spdk_pid79882 00:22:10.648 Removing: /var/run/dpdk/spdk_pid79941 00:22:10.648 Removing: /var/run/dpdk/spdk_pid80003 00:22:10.648 Removing: /var/run/dpdk/spdk_pid80332 00:22:10.648 Removing: /var/run/dpdk/spdk_pid81495 00:22:10.648 Removing: /var/run/dpdk/spdk_pid81633 00:22:10.648 Removing: /var/run/dpdk/spdk_pid81876 00:22:10.906 Removing: /var/run/dpdk/spdk_pid82416 00:22:10.906 Removing: /var/run/dpdk/spdk_pid82575 00:22:10.906 Removing: /var/run/dpdk/spdk_pid82732 00:22:10.906 Removing: /var/run/dpdk/spdk_pid82823 00:22:10.906 Removing: /var/run/dpdk/spdk_pid83001 00:22:10.906 Removing: /var/run/dpdk/spdk_pid83111 00:22:10.906 Removing: /var/run/dpdk/spdk_pid83770 00:22:10.906 Removing: /var/run/dpdk/spdk_pid83805 00:22:10.906 Removing: /var/run/dpdk/spdk_pid83839 00:22:10.906 Removing: /var/run/dpdk/spdk_pid84088 00:22:10.906 Removing: /var/run/dpdk/spdk_pid84124 00:22:10.906 Removing: /var/run/dpdk/spdk_pid84158 00:22:10.906 Removing: /var/run/dpdk/spdk_pid84585 00:22:10.906 Removing: /var/run/dpdk/spdk_pid84598 00:22:10.906 Removing: /var/run/dpdk/spdk_pid84848 00:22:10.906 Removing: /var/run/dpdk/spdk_pid84965 00:22:10.906 Removing: /var/run/dpdk/spdk_pid84983 00:22:10.906 Clean 00:22:10.906 14:04:59 -- common/autotest_common.sh@1451 -- # return 0 00:22:10.906 14:04:59 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:22:10.906 14:04:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:10.906 14:04:59 -- common/autotest_common.sh@10 -- # set +x 00:22:10.906 14:04:59 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:22:10.906 14:04:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:10.906 14:04:59 -- common/autotest_common.sh@10 -- # set +x 00:22:10.906 14:04:59 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:10.906 14:04:59 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:10.906 14:04:59 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:10.906 14:04:59 -- spdk/autotest.sh@395 -- # hash lcov 00:22:10.906 14:04:59 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:22:10.906 
14:04:59 -- spdk/autotest.sh@397 -- # hostname 00:22:10.906 14:04:59 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:11.164 geninfo: WARNING: invalid characters removed from testname! 00:22:37.760 14:05:24 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:39.661 14:05:28 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:42.191 14:05:31 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:45.474 14:05:34 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:48.021 14:05:36 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:50.550 14:05:39 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:53.082 14:05:42 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:53.340 14:05:42 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:53.340 14:05:42 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:53.341 14:05:42 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.341 14:05:42 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.341 14:05:42 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.341 14:05:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.341 14:05:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.341 14:05:42 -- paths/export.sh@5 -- $ export PATH 00:22:53.341 14:05:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.341 14:05:42 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:53.341 14:05:42 -- common/autobuild_common.sh@447 -- $ date +%s 00:22:53.341 14:05:42 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721916342.XXXXXX 00:22:53.341 14:05:42 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721916342.VQl0ut 00:22:53.341 14:05:42 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:22:53.341 14:05:42 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:22:53.341 14:05:42 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:22:53.341 14:05:42 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:53.341 14:05:42 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:53.341 14:05:42 -- common/autobuild_common.sh@463 -- $ get_config_params 00:22:53.341 14:05:42 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:22:53.341 14:05:42 -- common/autotest_common.sh@10 -- $ set +x 00:22:53.341 14:05:42 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:22:53.341 14:05:42 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:22:53.341 14:05:42 -- pm/common@17 -- $ local monitor 00:22:53.341 14:05:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:53.341 14:05:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:53.341 
14:05:42 -- pm/common@25 -- $ sleep 1 00:22:53.341 14:05:42 -- pm/common@21 -- $ date +%s 00:22:53.341 14:05:42 -- pm/common@21 -- $ date +%s 00:22:53.341 14:05:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721916342 00:22:53.341 14:05:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721916342 00:22:53.341 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721916342_collect-vmstat.pm.log 00:22:53.341 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721916342_collect-cpu-load.pm.log 00:22:54.276 14:05:43 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:22:54.276 14:05:43 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:22:54.276 14:05:43 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:22:54.276 14:05:43 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:54.276 14:05:43 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:22:54.276 14:05:43 -- spdk/autopackage.sh@19 -- $ timing_finish 00:22:54.276 14:05:43 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:54.276 14:05:43 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:22:54.276 14:05:43 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:54.276 14:05:43 -- spdk/autopackage.sh@20 -- $ exit 0 00:22:54.276 14:05:43 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:22:54.276 14:05:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:22:54.276 14:05:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:22:54.276 14:05:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:54.276 14:05:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:22:54.276 14:05:43 -- pm/common@44 -- $ pid=86712 00:22:54.276 14:05:43 -- pm/common@50 -- $ kill -TERM 86712 00:22:54.276 14:05:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:54.276 14:05:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:22:54.276 14:05:43 -- pm/common@44 -- $ pid=86713 00:22:54.276 14:05:43 -- pm/common@50 -- $ kill -TERM 86713 00:22:54.276 + [[ -n 5101 ]] 00:22:54.276 + sudo kill 5101 00:22:54.286 [Pipeline] } 00:22:54.305 [Pipeline] // timeout 00:22:54.310 [Pipeline] } 00:22:54.327 [Pipeline] // stage 00:22:54.332 [Pipeline] } 00:22:54.349 [Pipeline] // catchError 00:22:54.357 [Pipeline] stage 00:22:54.359 [Pipeline] { (Stop VM) 00:22:54.373 [Pipeline] sh 00:22:54.650 + vagrant halt 00:22:58.839 ==> default: Halting domain... 00:23:04.123 [Pipeline] sh 00:23:04.403 + vagrant destroy -f 00:23:07.685 ==> default: Removing domain... 
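Before the workspace teardown above, the coverage post-processing traced between 14:04:59 and 14:05:42 reduces to three steps: capture the test counters with lcov -c, merge them into the base capture taken earlier in the run (outside this part of the log), and strip third-party and example paths from the merged tracefile. A condensed sketch of that flow using the same lcov options seen above; the $out variable and the loop are shorthand here, and the --rc switches and -t test-name tag are left out for brevity:

  out=/home/vagrant/spdk_repo/spdk/../output
  lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk -o "$out/cov_test.info"      # capture test counters
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"      # merge base + test
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"                    # drop non-SPDK paths
  done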
00:23:07.955 [Pipeline] sh 00:23:08.233 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output 00:23:08.244 [Pipeline] } 00:23:08.265 [Pipeline] // stage 00:23:08.270 [Pipeline] } 00:23:08.284 [Pipeline] // dir 00:23:08.290 [Pipeline] } 00:23:08.304 [Pipeline] // wrap 00:23:08.310 [Pipeline] } 00:23:08.323 [Pipeline] // catchError 00:23:08.334 [Pipeline] stage 00:23:08.337 [Pipeline] { (Epilogue) 00:23:08.361 [Pipeline] sh 00:23:08.638 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:15.198 [Pipeline] catchError 00:23:15.200 [Pipeline] { 00:23:15.215 [Pipeline] sh 00:23:15.496 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:15.496 Artifacts sizes are good 00:23:15.506 [Pipeline] } 00:23:15.522 [Pipeline] // catchError 00:23:15.531 [Pipeline] archiveArtifacts 00:23:15.537 Archiving artifacts 00:23:15.707 [Pipeline] cleanWs 00:23:15.718 [WS-CLEANUP] Deleting project workspace... 00:23:15.718 [WS-CLEANUP] Deferred wipeout is used... 00:23:15.725 [WS-CLEANUP] done 00:23:15.726 [Pipeline] } 00:23:15.743 [Pipeline] // stage 00:23:15.749 [Pipeline] } 00:23:15.763 [Pipeline] // node 00:23:15.767 [Pipeline] End of Pipeline 00:23:15.805 Finished: SUCCESS